Actyx in IT & Production: Decentralised Edge Computing

October 7th 2020 by Maximilian Fischer

This article was originally published in German by IT & Production. Please follow this link to view the original content (page 12 ff.).

Manufacturing systems without a nerve centre

A new type of software could soon fundamentally change the current concepts of factory software. On the basis of a decentralised edge computing architecture, the traditionally high demands on scalability, flexibility and reliability can be met comparatively easily, and at significantly lower investment costs.

Factories are complex systems in which people, materials and machines must be optimally coordinated to produce products on time and in high quality. Yet the coordination of these processes is not always optimal: information flows and decisions are mostly analogue and not automated. Here, software can help to control processes better and to make factories more efficient overall while at the same time more flexible. Software can not only recognise in advance when a machine component will break down, but also take all the steps necessary to coordinate its replacement. Maintenance personnel are informed when they are needed at the machine, spare parts are picked in time, the transport of the spare parts by fork-lift truck drivers or AGVs is coordinated, and the repairer is digitally supported with the necessary information on repair steps, previous maintenance work and digital checklists. Software for the control and automation of shop-floor processes must be able to cope with complex decisions and at the same time have a high degree of reliability: when the software stops working, machines fail and production comes to a standstill.

Costs discourage many

However, many factories shy away from the often high investments and risks associated with the introduction of such shop-floor systems. According to a Bitkom study, 73% of the factories surveyed see high investment costs as an obstacle to introducing Industry 4.0 approaches. Software projects do not fail for lack of ideas and creativity, but often because of the cumbersome software used. Only when software becomes cheaper, more accessible and more flexible can Industry 4.0 achieve a breakthrough on a broad basis.

Paradigm shift announced

With decentralised edge computing, a new software paradigm could radically change this market: there are no central servers and therefore no individual components which, in the event of failure, can paralyse the complete system. Data is stored locally on edge devices such as IPCs, tablets, scanners or gateways. The software runs where it is used: on the device at the machine or with the person. The approach promises significantly improved reliability, scalability and flexibility compared to centralised client-server solutions, and especially compared to cloud-based systems. In the field of decentralised applications, major technical progress has been made, which now makes widespread use of this approach possible. Four developments stand out in particular:

  • lower hardware costs,
  • edge computing technologies,
  • decentralised computing,
  • event streaming technologies.

Hardware becomes cheaper

The first development is a massive reduction in the cost of reliable and powerful hardware. Since software is executed locally on end devices and data is also stored there, powerful devices are required. Industrial-grade tablets and gateways have become considerably cheaper and can now be purchased for low three-digit amounts. In addition, almost all automation manufacturers now offer controllers on which, in addition to the PLC logic, non-real-time software can also be executed. This reduces the need for additional retrofit hardware for machines.

Edge Computing

The second development is edge computing itself, a trend that has been greatly accelerated by the Internet of Things (IoT). In edge computing, data is processed locally. Machine data can, for example, be pre-processed and evaluated on a gateway instead of having to be sent to a server, in order to avoid disturbances or to detect necessary maintenance work. The evaluation can therefore be carried out quickly and reliably. Classically, the data is then transferred to a central server, which for IoT applications is usually located in the cloud. Calculations are no longer made exclusively centrally, which is why we also speak of distributed computing.
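To make this idea concrete, the sketch below shows how a gateway might evaluate sensor readings locally and only publish a condensed result instead of streaming raw data to a server. It is a minimal, illustrative example: the topic name, threshold and publish function are assumptions, not part of any specific product.

```typescript
// Minimal sketch of local pre-processing on an edge gateway (illustrative only).
// Raw sensor readings stay on the device; only an evaluated result is published.

type Reading = { timestamp: number; vibrationMm: number };

// Hypothetical publish function, e.g. backed by MQTT or an event stream.
type Publish = (topic: string, payload: unknown) => void;

const WINDOW_SIZE = 100;          // readings kept for the rolling average
const VIBRATION_LIMIT_MM = 0.8;   // assumed threshold indicating wear

function makeGatewayProcessor(publish: Publish) {
  const window: Reading[] = [];

  return (reading: Reading): void => {
    // Keep a rolling window of the latest readings on the gateway itself.
    window.push(reading);
    if (window.length > WINDOW_SIZE) window.shift();

    const avg =
      window.reduce((sum, r) => sum + r.vibrationMm, 0) / window.length;

    // Only the condensed finding leaves the gateway, not the raw data stream.
    if (avg > VIBRATION_LIMIT_MM) {
      publish('machine/42/maintenance-required', {
        observedAt: reading.timestamp,
        rollingAverageMm: avg,
      });
    }
  };
}
```

Because the evaluation runs on the gateway, it keeps working and reacting quickly even when the connection to a central server is slow or unavailable.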

Data exchange without a server

Decentralised computing goes one step further: there are no central components at all. Computers are connected in a network, data is exchanged directly peer-to-peer, and calculations are carried out only decentrally. The failure of one computer does not lead to the failure of the entire system; decentralised systems are extremely fail-safe and highly scalable. Blockchains use this architecture to make decentralised decisions. However, decentralised systems also have two disadvantages: they are more difficult to program and more difficult to operate than central systems. In recent years there has been enormous progress in communication protocols and development tools, which has made programming easier. This development began in the early 2000s with the BitTorrent movement and has gained significant speed in recent years through blockchain and Web3/dweb. The operation of decentralised systems is inherently more difficult, as not only a central server but every computer in the network has to be updated and monitored. Here too, modern DevOps tools for automated updates, integrated logging and debugging provide support, although they cannot remove the complexity completely.

Analyses in the data flow

The fourth trend is only indirectly related to the previous technologies, but is very powerful in combination with edge computing and decentralisation: event streaming technologies. Participants in a streaming network can subscribe to and publish data (pub/sub). Sender and receiver are decoupled; there can be any number of receivers for one sender and vice versa. This architecture makes it possible to build very modular systems: new receivers and senders can be added without adapting existing components. Of particular importance is the principle that each part of the system sends so-called events, i.e. self-contained data packets describing observed facts and thus not requiring coordination. The interpretation of these events is up to the respective recipient. This type of communication leads to a significant simplification, because all participants have the relevant data, with its history, at their disposal. For example, the current condition of a machine, including maintenance intervals, results from the status changes reported by this machine over time.
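The following sketch illustrates this last point: the current condition of a machine is derived by folding over the events it has reported. The event names and fields are illustrative assumptions chosen for the example, not a fixed schema.

```typescript
// Minimal sketch of deriving a machine's current condition from its event history.
// Each event is a self-contained fact; the receiver decides how to interpret it.

type MachineEvent =
  | { type: 'started'; at: number }
  | { type: 'stopped'; at: number; reason: string }
  | { type: 'maintenance-performed'; at: number };

type MachineState = {
  running: boolean;
  lastMaintenanceAt?: number;
  lastStopReason?: string;
};

const initialState: MachineState = { running: false };

// Interpret one observed fact and produce the next state.
function apply(state: MachineState, event: MachineEvent): MachineState {
  switch (event.type) {
    case 'started':
      return { ...state, running: true };
    case 'stopped':
      return { ...state, running: false, lastStopReason: event.reason };
    case 'maintenance-performed':
      return { ...state, lastMaintenanceAt: event.at };
  }
}

// The current condition is simply a fold over the reported status changes.
function currentState(history: MachineEvent[]): MachineState {
  return history.reduce(apply, initialState);
}
```

Because every participant can replay the same event history, any device in the network can compute this state locally, without having to ask a central server.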

Solving well-known problems

The combination of these technologies creates systems that meet the requirements already mentioned: reliability, modularity, scalability and flexibility. In the context of factories, such infrastructures enable the rapid implementation of small solutions, for example on a single production line. Later on, the solutions can be extended in functionality comparatively easily and rolled out to other production lines. Investments in highly redundant central servers and network infrastructure are not necessary for this architecture; the only infrastructure installed in the factory are the end devices that interact with people and machines. With this technology, the vision of an autonomous, decentrally controlled factory becomes tangible.
