
OPC UA Makes Process Observer Archetype Possible

Integration

Modern manufacturing automation systems usually consist of numerous different IT systems located at the business management/operation and process control levels. This is a broad class of application domains in which business IT and control systems converge into one large whole, with the aim of improving performance through macro optimization and synergy. This domain is called Industrial IT. Frequently the systems are distributed geographically among multi-division organizations.

To deploy the above-mentioned convergence the systems have to be integrated – they must interoperate. After integration the systems should make up one consistent system, i.e. each subsystem (as a component) must be able to communicate with the others. The final information architecture depends strongly on the organization, its culture, the type of technology and the target industrial process. Communication is necessary to exchange data for production state analysis, scheduling of operation actions, supervisory control and task synchronization in the process as a whole.

To make up a consistent system as the ultimate result of the integration process, one of the following architectures can be applied:

  • Peer to peer: manually created point-to-point links to meet short-term ad hoc objectives.
  • All in one: a product dedicated to both functions: process control and business management.
  • Process Observer: a consistent, homogenous real-time representation of the process control layer.
Process Observer

Fig. 1 Process Observer Archetype

Process Observer (Fig. 1) is a kind of virtual layer – a “big picture” of the underlying process layer composed of unit data randomly accessible by means of a unified and standardized interface. It allows the process and business management systems to share data from plant-floor devices using international data exchange standards. Process Observer acts as a bridge between the plant-floor control level and the process and business management levels.

Thereby the structure of the links becomes systematic and the existing functionality of the upper layers is preserved. Using the Process Observer archetype the number of links between components can be substantially reduced and – what is very important – it becomes a linear function of the number of nodes.

Now, the links can be used to gather the process data in a unified, standardized way (see Fig. 2).

The Process Observer archetype greatly reduces the overall complexity and decreases interdependence by decoupling application associations from the underlying communication routes. Additionally, it allows applying a systematic design methodology and building the information architecture independently of the underlying communication infrastructure.
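
To illustrate the decoupling, here is a minimal Python sketch of the idea (all names are hypothetical and not part of any product): clients read process variables by name through one unified interface, while the observer decides which underlying route serves each read.

    class ProcessObserver:
        """Unified facade over the process layer (illustrative only)."""

        def __init__(self):
            self._routes = {}  # tag name -> callable returning the current value

        def register(self, tag, reader):
            self._routes[tag] = reader

        def read(self, tag):
            # Clients never deal with protocols or communication routes.
            return self._routes[tag]()

    observer = ProcessObserver()
    observer.register("Pump1/Flow", lambda: 42.0)  # stand-in for a real driver
    print(observer.read("Pump1/Flow"))             # -> 42.0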

Process Observer Deployment

Implementation

The Process Observer concept has been implemented in the CommServer™ software family. This communication server is optimized for applications in distributed process control systems. To provide a consistent, sole representation of a distributed real-time process at the upper-layer boundary – according to the model – CommServer™ has to implement unique functionality, provide redundancy and optimize the utilization of the underlying communication infrastructure.


Fig. 2 CommServer-Process Observer Implementation

Functionality

Communication

To meet scalability and open connectivity requirements, CommServer™ exposes OPC Unified Architecture (OPC UA) interfaces to be consumed by upper-layer applications. One of the main objectives of using OPC UA is to provide a uniform bridge between digital plant-floor devices and the systems providing services at the process and business management level. At the very beginning, this bridge was invented as a translator between the vendor-specific languages (protocols) used by the devices for data access and a widely accepted one – OPC. Therefore, each OPC UA server has to be equipped with a vendor-specific component called a DataProvider, which implements the selected protocol and the communication infrastructure management functions. The popularity of the OPC UA standard keeps growing, but many applications still do not support it. For that reason, another member of the family, DataPorter™, offers SQL and XML connectivity (Fig. 2).
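
The DataProvider contract can be pictured as a small plug-in interface. The following Python sketch only illustrates the concept – the class and method names are hypothetical and do not reflect the actual CommServer™ API:

    from abc import ABC, abstractmethod

    class DataProvider(ABC):
        """Vendor-specific plug-in wrapping one protocol behind a common contract."""

        @abstractmethod
        def read(self, address):
            """Fetch the current value of a unit datum."""

        @abstractmethod
        def write(self, address, value):
            """Send a value down to the plant-floor device."""

    class ModbusDataProvider(DataProvider):
        # Placeholder logic only; a real provider would manage the link,
        # framing, addressing and retries of the concrete protocol.
        def read(self, address):
            return 0

        def write(self, address, value):
            pass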

Process Simulation

CommServer™ does not only play the role of a translator and communication engine. By offering the possibility of creating simulators and publishing simulation data in the same way as the process data, the final process representation can be complemented with otherwise unavailable information obtained by processing current and historical values. To commence factory acceptance tests of any system, a testing environment has to be built. Using simulators instead of communication drivers, it is possible to switch seamlessly between production and test environments, reducing the cost by an order of magnitude.
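
Continuing the hypothetical sketch above, a simulator is just another provider honouring the same read/write contract, so the environment can be switched by swapping one component:

    import math
    import time

    class SimulationDataProvider:
        """Drop-in stand-in for a protocol driver during factory tests
        (same read/write contract as the DataProvider sketch above)."""

        def read(self, address):
            # Publish a synthetic sine wave instead of polling real hardware.
            return 50.0 + 10.0 * math.sin(time.time() / 60.0)

        def write(self, address, value):
            pass  # writes are simply absorbed in the test environment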

Resource Monitoring

In a production environment, monitoring and management of the resources that make up the information processing and communication infrastructure is often just as important as access to the real-time process data. CommServer™ allows publishing data gathered from active network devices in the same way as the process data.

Server to Server Interactions

This is a scenario in which one server acts as a client of another server. In the presented architecture it is implemented using a dedicated OPC Classic or OPC UA DataProvider. Server-to-server interactions allow the development of servers that exchange data with each other on a peer-to-peer or vertical-hierarchy basis to offer redundancy, aggregation, concentration or layered data access management.

Three levels of redundancy

Using the Process Observer archetype with only one common component responsible for interconnecting plant-floor devices and the process and business management systems creates a single point of failure. To eliminate this risk, the proposed solution offers three levels of redundancy to increase availability. They can be applied independently, according to an appropriate analysis and assessment of the risk.

  • Hardware: To provide truly fault-tolerant systems, redundant hardware can be used. This solution provides the same processing capacity after a failure as before it. There are two options: box redundancy and component redundancy. The first is achieved by using a primary server and a backup server. Alternatively, fault-tolerant hardware designed from the ground up can be used, building multiples of all critical components, such as CPUs, memory, disks and power supplies, into the same computer to ensure reliability. In the event one component fails, another takes over the communication without skipping a beat. Switching from one server to the other should be transparent to the clients.
  • Communication paths: To increase availability, CommServer™ assures redundancy of data transmission paths. It is designed to recover from a communication path failure by detecting the failed route and switching to another one, if available. Path redundancy improves robustness, because the same remote unit can be reached over different physical layers, eliminating the single-point-of-failure dependency. The server is responsible for selecting the route used to transfer the data and for monitoring the availability of inactive paths. Duplication of communication paths may be costly, because data transfer over distributed networks is usually not free. The crucial feature of path redundancy is that it provides path multiplication without transferring the same data over the network many times, while still controlling backup path availability.
  • Signals: For reliability, this feature allows defining replicated signals. Thus, if one signal fails, a second one is available as the same OPC tag. To determine whether a fault has occurred (fault detection) and which signal is affected (fault isolation), two methods are available: source-based and statistical. Source detection relies on information about the signal quality received from a plant-floor device. Statistical methods use a confidence level as an interval estimate of a population parameter (see the sketch below).
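
The two detection methods can be pictured as follows. This is a simplified Python sketch under assumed data shapes (value/quality pairs and a k-sigma band), not the product's actual algorithm:

    def select_signal(primary, backup):
        """Source-based detection: prefer the signal whose quality flag,
        as reported by the plant-floor device, is good."""
        for value, quality in (primary, backup):
            if quality == "good":
                return value
        raise RuntimeError("both replicated signals failed")

    def in_confidence_band(sample, mean, sigma, k=3.0):
        """Statistical isolation: reject a sample falling outside a
        k-sigma confidence interval estimated from recent history."""
        return abs(sample - mean) <= k * sigma

    print(select_signal((74.2, "bad"), (74.5, "good")))    # -> 74.5
    print(in_confidence_band(74.2, mean=75.0, sigma=0.5))  # -> True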

Optimal communication

Engaging an intermediate component as a driver for plant-floor devices is a middleware archetype used worldwide in thousands of applications. But to provide a consistent, sole representation of a distributed real-time process at the upper-layer boundary – according to the model – CommServer™ has to implement unique features optimizing the utilization of the underlying communication infrastructure:

  • Multi-Protocol Capability: many protocols can be implemented as DataProvider components, plugged in and utilized simultaneously;
  • Multi-Medium Capability: any physical layer technology can be used to start building a communication stack;
  • Multi-Channel Connectivity: numerous independent communication routes can be activated simultaneously to gather raw process data;
  • Adaptive Retry Algorithm: each protocol retries data acquisition after a communication error, but adapting the number of retries to the current conditions greatly increases the overall bandwidth;
  • Adaptive Sampling Algorithm: responsible for adjusting the sampling rate of plant-floor devices according to the current process state (see the sketch after this list);
  • Optimal Transfer Algorithm: responsible for minimizing the difference between the individual process data update rates required by clients and the current sampling rate of the process control units.
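
As an illustration of the adaptive sampling idea only (the actual CommServer™ algorithm is not published here), a poll scheduler might shorten the sampling period while the value is moving and back off while it is quiet; all tuning constants below are assumptions:

    def next_sampling_period(period, change, deadband,
                             min_period=0.1, max_period=60.0):
        """Speed sampling up when the process is active, slow it down
        when the process is quiet."""
        if abs(change) > deadband:
            period /= 2.0   # value is moving: sample faster
        else:
            period *= 1.5   # value is quiet: back off
        return max(min_period, min(period, max_period))

    period = 5.0
    for change in (0.0, 0.0, 3.2, 4.1, 0.1):
        period = next_sampling_period(period, change, deadband=1.0)
        print(round(period, 2))   # 7.5, 11.25, 5.62, 2.81, 4.22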



OPC UA Makes Highly Distributed Network Control Systems Possible

Integration

Nowadays, to stay on the leading edge, manufacturing automation systems have to evolve. Usually they consist of numerous different IT systems located at the business management/operation and process control levels. It is a broad class of application domains in which business IT and control systems converge into one large whole, with the aim of improving performance as a result of macro optimization and synergy. This domain is called Industrial IT. Frequently the systems are distributed geographically in multi-division organizations.

To deploy the above-mentioned convergence the systems have to be integrated – they must interoperate with each other. From integration we should expect improved performance as a result of synergy and macro optimization effects.

After integration the systems should make up one consistent system, i.e. each subsystem (as a component) must be able to communicate with the others. The final information architecture depends strongly on the organization, its culture, the type of technology and the target process. Communication is necessary to exchange data for production state analysis, scheduling of operation actions, supervisory control and task synchronization in the process as a whole.

The vast majority of enterprises declare that difficulties with integrating the existing systems are the most important obstacle to expanding process control and business management support. Other major integration problems are the diversification of the existing systems, their quantity and their non-unified data architecture.

The integration process results in Large Scale Distributed Network Control Systems (LSDNCS). Systems belonging to this class are usually created in a multi-step integration process. To succeed, the process has to be governed by a well-defined information and communication architecture.

Integration Models

System integration entails information exchange. To exchange information we need an association between components. Going further, to instantiate an association, i.e. to make the components interoperable, we need at the same time a common:

  • information representation – a language (data type),
  • underlying communication infrastructure – a transport (protocol + medium).

We must be aware that by establishing an association we are actually building an information architecture – the system structure. It is worth stressing that the selection of the architecture has a great impact on the final robustness, maintainability, expandability, dependability, functionality and, last but not least, the implementation costs.

Generally we have three possibilities:

Peer to peer approach: A common integration practice is to achieve short-term ad hoc objectives by manually creating proprietary, dedicated point-to-point links between the subsystems wherever it is useful (see Fig. 1). Using this approach at random, we can establish numerous independent links – up to (k+n)(n+k-1)/2, where k and n are the numbers of business and process control components respectively. This number grows rapidly, e.g. it equals 1770 links for n=10 and k=50, so we finally have to deal with rapidly growing complexity leading to communication chaos that is difficult to maintain (the arithmetic is sketched below).
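
The arithmetic behind these numbers is simple enough to verify directly in Python:

    def peer_to_peer_links(n, k):
        # Every component may need a link to every other component.
        return (n + k) * (n + k - 1) // 2

    def process_observer_links(n, k):
        # Every component needs just one link to the observer layer.
        return n + k

    print(peer_to_peer_links(10, 50))      # -> 1770
    print(process_observer_links(10, 50))  # -> 60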

In this model the information and communication architectures are tightly coupled. This approach is very popular, but it adversely affects all of the solution's features.

Fig. 1 Peer to peer approach

Totalitarian approach: One option to overcome the communication chaos problem is to use an “all in one” product dedicated to both functions: process control and business management (see presentation). Usually it is provided as one complex, total system – let's call it a super-system. Most MES (Manufacturing Execution System) vendors offer their products as a panacea for all the problems of chaotic system integration.

Actually, the super-system does not solve the problem; it only hides it under an opaque cover and makes the solution very difficult to expand and vendor-dependent forever.

In this model the system distribution is reduced and, as a consequence, many associations can be instantiated on the same platform without the necessity to communicate. This model reduces complexity by reducing communication needs.

If strictly observed it could be a dead end.

Process Observer approach: The Process Observer is a consistent, homogenous real-time representation of the process control layer. It is a kind of virtual layer – a “big picture” of the underlying process layer composed of unit data randomly accessible by means of a unified and standardized interface (see presentation). It allows the process and business management systems to share data from plant-floor devices using international data exchange standards. Process Observer acts as a bridge between the plant-floor control level and the process and business management levels.

Thereby the structure of the links becomes systematic and the existing functionality of the upper layers is preserved. Now they can gather the process data in a unified, standardized way (see Fig. 2).

Using the Process Observer archetype the number of links between components can be substantially reduced and – what is very important – it becomes a linear function of the number of nodes.

The Process Observer model greatly reduces the overall complexity and decreases dependency by decoupling application associations from the underlying communication routes. Additionally, it allows applying a systematic design methodology and building the information architecture independently of the underlying communication infrastructure.

Fig. 2

Related articles

Real-Time Communication for Large Scale Distributed Control Systems (Proceedings of the International Multiconference on Computer Science and Information Technology pp. 849–859 ISSN 1896-7094)

Process and business layers robust integration (white paper)

Communication management in the Process Observer Archetype (Proceedings of the 16th conference “Polish Teletraffic Symposium 2009”)

Large Scale Distributed Process and Business Management Integration (Proceedings of the 14th International Congress of Cybernetics and Systems of WOSC)

OPC UA Makes Production Traceability Possible

A primary objective of analyzers is to determine the process state/behavior by measuring selected physical values that are characteristic of it. The obtained result – process data – is used to control, trace and optimize the production process.

To integrate analyzers into supervisory control and tracing systems, the process data must be transported and must unambiguously represent the process and product for the parties that are to interoperate. To meet this requirement it is proposed to employ OPC Unified Architecture, a universally accepted, platform-neutral communication standard.

In 2008 the OPC Foundation announced support for Analyzer Devices Integration into OPC Unified Architecture and created a working group composed of end users and vendors, whose main goal was to develop a common method for data exchange and an analyzer data model for process and laboratory analyzers. In 2009 the OPC Unified Architecture Companion Specification for Analyser Devices was released. To prove the concept, a reference implementation has been developed, containing an ADI-compliant server and a simple client built with the Software Development Kit released by the OPC Foundation.

The model described in the specification is intended to provide a unified view of analyzers irrespective of the underlying device. This Information Model is also referred to as the ADI Information Model. As mentioned, analyzers can be further refined into various groups, but the specification defines an Information Model that can be applied to all of them.

The ADI Information Model is located above the DI (Device Integration) Information Model. This means that the ADI model refers to definitions provided by the DI model, but the reverse is not true. To expand the ADI Information Model, additional layers shall be provided.

There is a variety of analyser groups; however, the ADI Information Model is generic, and therefore, before it is implemented in a particular application, it must be expanded with application-specific types and customized by overriding the predefined components.

Appropriate adaptation and implementation of the Information Model is a basic requirement for offering ADI-ready and interoperable products. From the experience gained during the development of the reference implementation, it can be stated that this process can be accomplished with very limited resources. Thanks to the reference implementation and supporting tools like CAS Address Space Model Designer, only basic knowledge of the Address Space and Information Model concepts is required.

Because there is a large variety of analyzer types from various vendors, with many different types of data, including complex arrays and structures, a real challenge is the integration of analyzers with control, tracing and monitoring systems. Initiatives such as Process Analytical Technology are driving analyzer integration, and the best way to accomplish this is via open standards. To address this problem, two questions can be distinguished:

  • How to get access to (transport) the process data,
  • How to represent (model) the process data.

OPC Unified Architecture technology meets all the requirements, because:

  • It is a platform neutral standard allowing easy embedded implementation.
  • It is designed to publish real-time, historical and meta data.
  • It is designed to support complex data types and object models.
  • It is designed to achieve high speed data transfers using efficient binary protocols.
  • It has broad industry support beyond just process automation and is being used in support of other industry standards such as S95, S88, EDDL, MIMOSA, OAGiS.

One of the main goals of OPC Unified Architecture is to provide a consistent mechanism for the integration of process control and enterprise management systems using the client/server middleware archetype. To make systems interoperable, the data transfer mechanism must be associated with a consistent information representation model. OPC UA uses an object as the fundamental notion to represent the data and activity of an underlying system. The objects are placeholders of variables, events and methods, and are interconnected by references. This concept is similar to well-known object-oriented programming (OOP), a programming paradigm that uses “objects” – data structures consisting of fields, events and methods – and their interactions to design computer programs. The OPC UA Information Model provides features such as data abstraction, encapsulation, polymorphism and inheritance.

The OPC UA object model allows servers to provide type definitions for objects and their components. Type definitions may be abstract, and may be inherited by new types to reflect polymorphism. They may also be common or system-specific. Using type definitions to describe the information exposed by the server allows:

  • Development against type definition.
  • Unambiguous assignment of semantics to the data expected by the client.

Having defined types in advance, clients may provide dedicated functionality, for example displaying the information in the context of specific graphics.
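
For example, using the open-source python-opcua library (an independent implementation, not the reference one mentioned above; the endpoint, namespace and type names below are hypothetical), a server can define a type once and expose instances against it:

    from opcua import Server, ua

    server = Server()
    server.set_endpoint("opc.tcp://0.0.0.0:4840/example/")  # hypothetical endpoint
    idx = server.register_namespace("http://example.org/analysers")

    # Define a type once...
    base = server.get_node(ua.ObjectIds.BaseObjectType)
    analyser_type = base.add_object_type(idx, "AnalyserType")

    # ...then expose instances against it, so clients can be developed
    # against the type definition instead of a concrete instance.
    objects = server.get_objects_node()
    dev = objects.add_object(idx, "Analyser1", objecttype=analyser_type.nodeid)
    dev.add_variable(idx, "Temperature", 20.0)

    server.start()  # serve until server.stop() is called on shutdown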

The Information Model is a very powerful concept, but it is abstract and hence, in a real environment, it must be implemented in terms of bit streams (to make information transferable) and addresses (to make information selectively available).

Information exposed by the OPC UA server is composite. Generally speaking, to select a particular piece of information a client has two options: random access or browsing. Random access requires that the target entity has been assigned a globally unique address and that the client knows it in advance. We call these well-known addresses. It is applicable mostly to entities defined by standardization bodies. The browsing approach means that the client walks down the available paths that build up the structure of the information. This process is costly, because instead of pointing at the target directly, we need to discover the structure of the information step by step using relative identifiers. The main advantage of this approach is that the client does not need any prior knowledge of the structure – clients of this type are called generic clients. To minimize the cost, after the target has been found, every subsequent access can use random access; this is possible because the browsing path is convertible to a globally unique address using the server services.
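
Both access styles can be seen in a short client sketch, again using the open-source python-opcua library with hypothetical browse names: the browse path is resolved once, and the resulting NodeId is reused for random access.

    from opcua import Client

    client = Client("opc.tcp://localhost:4840/example/")  # hypothetical endpoint
    client.connect()
    try:
        # Browsing: walk the paths using relative "ns:BrowseName" identifiers.
        root = client.get_root_node()
        node = root.get_child(["0:Objects", "2:Analyser1", "2:Temperature"])
        # Random access: reuse the globally unique NodeId found by browsing.
        same_node = client.get_node(node.nodeid)
        print(same_node.get_value())
    finally:
        client.disconnect()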


OPC UA Makes Smart Utility Distribution Systems Possible

Most of us don't give much thought to the major utilities until one or more of them stop working or the price goes up. In Poland, 15% to 20% of the generated heat is lost in transit from the producer (Combined Heat & Power plants) to the consumers, which amounts to hundreds of millions of euro a year for the several biggest national networks. In most cases, nonrenewable conventional fossil fuel must be used up in order to produce that heat, i.e. natural resources must be depleted and the environment must be polluted.

Following the concept of smart grids, more and more companies decide to start working on smart utility distribution systems (gas, water, chilled water, or even oil) to improve performance and availability, and to enable the consumers to monitor consumption and influence its economical use.

An example is the heating system of Warsaw, the largest centralized district heating system in Poland and one of the largest in the world. Through a district heating network common for the whole city area, it provides heat to almost 19 thousand buildings in Warsaw, thus satisfying ca. 80% of the demand. This municipal heating system consists of almost 1700 km of network. The power transmitted from the sources amounts to ca. 5200 MW, and ca. 10000 GWh of heat is supplied to the consumers via the heating network.

Important components of the heating system that are involved in heat transmission to the customers are (read the full case study):

  • Water pumping stations
  • Consumer exchanger substations
  • Heat chambers

Generally speaking, the task of “smart distribution” is to support all the processes that make improvement of its operational performance possible. Therefore, with the aim of optimizing the processes, the solution should provide:

  • Availability management
  • Cost management

Usually the above tasks are contradictory to some extent, e.g. when minimizing the cost we cannot ignore the consumers' needs.

Optimization is a method of determining the best (optimal) solution. It is a search for an extremum of a certain function from the point of view of a specific criterion (index), e.g. cost, temperature or time.

The selection of the indexes depends on many factors, but in any case we need real-time and historical data gathered from highly distributed process control devices (PLCs, distributed I/O, meters, etc.) to provide optimal process control. In the example described above, up to 500 000 values are expected to be measured for this purpose.

In order to make the design and analysis of such an elaborate system possible, it is necessary to distribute certain function groups that are logically relevant to each other, using the compound system concept. A well-defined functionality boundary must be a distinguishing feature of each system of that type. To perform their functions, those systems must communicate, creating mutual links.

To fulfill the above requirements of Smart Utility Distribution Systems we need the following subsystems:

  • Optimization: supervisory and optimal control of the real-time processes
  • Telemetry: remote control and data acquisition
  • Repository: database management systems to archive process data

To make this architecture deployable and, next, maintainable some critical issues must be addressed:

  • Openness – components communication is based on a common open standard
  • Unified data access – real-time, historical and metadata must be available to all clients using a common publishing mechanism
  • Complex data – with the goal to protect data integrity, complex process data must be supported
  • Security – the strategic nature of these systems requires appropriate security protection against malicious attack
  • Internet technology – it is obvious that Internet technology must be used at the data transportation level between the systems, even if we are going to build a separate private network

In my opinion, the only answer to the question of how to meet these requirements is OPC Unified Architecture (OPC UA). It is a set of specifications for the development of software connecting such systems as ERP, SAP, GIS, MES or process control systems. These systems are designed for information exchange and are used for the control and supervision of real-time industrial processes. OPC UA defines the infrastructure modeling concept in order to facilitate the exchange of process data. The whole architecture of the new standard improves and extends the previous OPC (now called classic) capabilities in the field of application security, stability, event tracking and data management, thus improving the interoperability of the distributed architecture components.

OPC UA permits easier cooperation and data exchange between the process control and business management layers. It is designed to support a wide range of devices, from the lowest level with PLCs up to the distributed systems dealing with IT management in an enterprise.
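
As a rough illustration of what such data acquisition may look like in practice (a sketch using the open-source python-opcua library; the endpoint address and NodeId below are made up), a telemetry client can subscribe to data changes instead of polling:

    from opcua import Client

    class ChangeHandler:
        """Receives data-change notifications pushed by the server."""

        def datachange_notification(self, node, val, data):
            print(node, val)

    client = Client("opc.tcp://scada.example.org:4840/")  # hypothetical address
    client.connect()
    try:
        node = client.get_node("ns=2;s=PumpStation1.Flow")  # hypothetical NodeId
        sub = client.create_subscription(500, ChangeHandler())  # 500 ms interval
        handle = sub.subscribe_data_change(node)
        # ... keep running; notifications arrive asynchronously ...
    finally:
        client.disconnect()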

It is worth noting that OPC UA technology is based on services and objects. For more than a decade software authors have been using solutions based on objects and services, but those solutions had never been transferred directly to industrial applications. OPC Unified Architecture has become the first standard close to the technological process that is of a dual nature, both object oriented (Object Oriented Architecture – OOA) and service oriented (Service Oriented Architecture – SOA).

The application of the OPC Unified Architecture standard as a foundation for the proposed architecture will enable us to:

  • Standardize communication between component systems
  • Create a consistent information model that is available to all systems and illustrates the system structure
  • Create a database model (metadata) based on an OPC UA information model, thus giving applications that use the Repository access not only to process data but also to metadata describing the system objects
  • Provide open solutions, i.e. the possibility of free connection of the next components in the future
  • As OPC UA is an Internet technology, it could be used to build even a global solution

The OPC UA standard allows us to obtain an open, interoperable and scalable architecture, thus making the development of the infrastructure and its use for other tasks possible in the future. As the proposed architecture is based on open connectivity standards, it provides a framework for the integration of highly distributed “islands of automation” with top-level applications employing artificial intelligence for the optimal control of the Distribution Network as a whole.
