Monthly Archives: August 2013

OPC Unified Architecture: Enabler of Future Solutions

Introduction

Talking about acceptance and adoption of OPC Unified Architecture, we usually focus on the uniqueness and remarkable features of this standard, with the goal of sending the message: “OPC UA is the best interoperability standard – it is much better than any classic solution ever available” to the community. Additionally, we keep circulating this message over and over among other members of our close OPC UA community. My concern is whether this is an effective approach, because it looks like “the old shoes syndrome” – it is not reason enough to buy new shoes that much better ones are available. We should rather say: “don’t use your old shoes while jumping onto your windsurfing board only because they are comfortable – it is not very sensible”. On the grounds of my personal experience, I am trying to imagine new, as yet unexplored fields of potential applications of OPC UA. It is obvious that their number is uncountable, therefore a selection key must be applied. To follow the idea from the introduction above, “ENABLER” seems to be the most appropriate flag for the technology, solution, application, model, approach, etc. that OPC UA makes possible.

The Exploration Catalog

To make my dreaming more useful for others, my first attempt is to prepare a series of short articles (a catalog) on new application scopes where OPC UA could be recognized as a prerequisite. All of them have a common title pattern “OPC UA Makes it Possible”. Today the catalog consists of:

  • OPC UA Makes Process Observer Archetype Possible: Process Observer is a kind of virtual layer – a “big picture” of the underlying process layer composed of unit data randomly accessible by means of a unified and standardized interface. It allows the process and business management systems, using international standards of data exchange, to share data from plant floor devices. Process Observer is like a bridge between the plant-floor control level and the process and business management levels.
  • OPC UA Makes Complex Data Access Possible: The Industrial IT domain is an integrated set of ICT systems. System integration means the necessity of information exchange between them (the nodes of a common domain). ICT systems are recognized as a typical means of processing information. The main challenge of deploying an Industrial IT solution is that information is abstract – it is knowledge describing a situation in a selected environment, e.g. the temperature in a boiler, a car’s speed, an account balance, etc. Unfortunately, machines cannot be used to process abstraction. It is also impossible to transfer abstraction from one place to another.
  • OPC UA Makes Highly Distributed Network Control Systems Possible: Nowadays, modern manufacturing automation systems are inherently complex. Usually they consist of numerous different IT systems located at the business management/operation and process control levels. To achieve convergence, the systems have to be integrated – they must interoperate with each other. From integration we should expect improved performance as a result of synergy and macro-optimization effects.
  • OPC UA Makes Global Security Possible: We can observe rapid development of globally scoped applications for domains like health, banking, safety, etc. The globalization process is also observed in control engineering. The secure transfer of process control data over the Internet must, therefore, be addressed as the most important prerequisite for this kind of application.
  • OPC UA Makes Cloud Computing Possible: Cloud Computing is defined as a method of providing requested functionality as a set of services. Following the Cloud Computing idea and offering control systems as a service requires a mechanism built on the service concept and supporting abstraction and virtualization – the two main pillars of the Cloud Computing paradigm.
  • OPC UA Makes Smart Factory Possible: In this case “collaboration” is the key word. Analyzing the collaboration needs of the smart factory we must distinguish two dissimilar targets surrounding the factory: humans and applications. To make this collaboration well-defined in the information exchange and behavioral aspects, the collaboration platforms (e.g. SharePoint) and integration measures (OPC UA) must be integrated.
  • OPC UA Makes Smart User Interface Possible: It introduces the concept of semantic HMI, an approach that bases the interface on discovering the meaning of process data using the metadata provided by plant floor measurement and control devices. Additionally, a network-connected HMI needs special security precautions to be applied.
  • OPC UA Makes Production Traceability Possible: To use analyzers and track selected parameters of a product and its ingredients, complex data must be managed, i.e. created, transmitted, processed, and saved. To be useful, process data must be exposed in the context of well-known semantics represented by the metadata.
  • OPC UA Makes Smart Utility Distribution Systems Possible: Following the concept of smart grids, more and more companies decide to start working on smart utility distribution systems (gas, water, chilled water, or even oil) to improve performance and availability. The process is dispersed geographically and partially managed by independent operators. An active role of the ultimate consumer is very important.

All the articles above are OPC UA related. For those looking for more information about this interoperability standard, there are two articles providing very basic information that should, I hope, help to follow the main topics.

  • OPC UA – Specifications: OPC Unified Architecture is described in a layered set of specifications issued by the OPC Foundation and broken into parts. It is purposely described in abstract terms and only in selected parts coupled (mapped) to existing technology on which software can be built. This layering is intentional and helps isolate changes in OPC UA from changes in the technology used to implement it.
  • OPC Unified Architecture: Main Technological Features: It focuses on new features of this interoperability standard including: service oriented architecture, object-oriented information model, abstraction and mapping, security, profiles, robustness.

Looking for new application scopes of OPC UA, we must face up to managing team work aimed at exploring new, undiscovered areas. On the grounds of experience gained while managing a variety of innovative process control and business management projects, I can say that scope definition and budget estimation are always the most challenging tasks. Typically, if the estimated budget of a project is higher than the others, the solution provider is recognized as inefficient in one way or another. But there might be another reason where innovative projects are concerned, i.e. the provider’s know-how and extraordinary experience make a better assessment possible. Better always means higher in this context, so typically it puts the solution provider in an underprivileged position and leads to the “more stupid the better” issue. For an innovative project, the main reason why its critical parameters are hardly predictable is its innovative nature. By definition, innovation – the translation of an idea or invention into a product or service that creates value – is an exploration of unexplored areas. The leader of the team must, therefore, face up to a high level of uncertainty. The following article provides some insights and a proposal with the goal of mitigating this issue.

  • Embedding Agile Principles as Contract Rules: It proposes a methodology framework that tightly couples agile management (to dynamically control the work scope and time framework) to workload tracking with the goal of maximizing the value for money.

Conclusion

To bring the presented ideas into solutions, more work is required with the aim of preparing comprehensive guidance collecting all that is needed to help deploy them in a real environment. Before trying to figure out what should be done to step forward, the audience of the outcome must be determined. Thus we should address the following needs:

  • For end users – adding solutions to requirements, but limited to feasible ones only
  • For integrators – adding solutions to portfolio, but limited to confirmed ones only
  • For vendors – adding features to products, but limited to required ones only

To create a foundation supporting deployment of the technology in new areas, the effort should be focused on:

  • Feasibility studies: aimed at describing the architecture (re-usable templates) as an interconnection of products making up a structure, the product features required to interconnect them in a consistent way, the business processes surrounding the solution in question, cost estimation, and solution profitability.
  • Pilot applications: aimed at providing proof of concept and “how to …” cookbook.
  • Best practice guidance: to maintain the quality and minimize application risks.

All the activities on the wish list above require an appropriate business model to happen, but this topic is outside the scope of this article. The good news is that governments and the European Union support innovative projects in some countries, e.g. Poland, making research and development much cheaper (up to 85% might be refunded). Since the beginning of the 2007-2013 financial perspective, Poland has been the largest recipient of support under the Cohesion Policy in the history of the European Union. In the 2014-2020 financial perspective the support is expected to be even greater. There are many programs planned with small and medium-sized enterprises as the main targets, with priorities focused, among others, on:

  • Research and development of modern technologies
  • R&D infrastructure
  • Capital for innovation
  • Investments in innovative undertakings
  • Diffusion of innovation
  • Polish economy on the international market

Do not miss this opportunity – you will be welcomed to Poland.

OPC UA Makes Smart User Interface Possible

Modern control systems rely heavily on graphical user interfaces. “A picture is worth a thousand words”, but it seems that the future of Human Machine Interfaces in automation is far beyond that.

As opposed to the SCADA term, a lightweight local user interface of a machine is sometimes referred to as the human-machine interface (HMI) – in this context it is an embedded part of the machine. SCADA, on the other hand, is an all-in-one software package that consists of tightly coupled components implementing the functionality to operate a system as a whole. It is worth noting that, regardless of the application kind, this interface is a place where an interaction occurs between someone responsible for making a decision and something responsible for the decision execution. This post addresses the question of what the consequences are if this interface is used, for example, to start drilling by a CNC machine in one case, or alternatively to remotely start moving, say, a load of 200 MW from one power plant to another in the other case. After all, in both cases the operation can be initiated by pressing a virtual “ACCEPT” button on a touch screen. However, is it a sufficient reason to call this interface an HMI device in both cases and, what is more important, can we use the same or similar solutions in all circumstances to decrease development and deployment costs?

In any case, while interacting with a machine or with a system, we finally operate a process. To operate effectively we must fulfill the following requirements:

  • Provide a representation of the process behavior and its current state – output interface;
  • Provide sensors to allow entering the operator decision – input interface;

The vendors of modern solutions – those that meet demanding customer expectations – employ for this purpose 3D graphics, touch screens, voice recognition, motion tracking and many other technologies. However, communication with the user is only one aspect that we must focus on. To recognize the others we have to look under the cover.

Automated processes are dynamic and stateful, so the interface has to provide an informative context for decision making. To reach this goal, the process behavior must be tracked all the time by processing its variables to optimally adjust the screen content and expose the most important elements at any instant of time. As there are more and more process variables within automation systems, one has to choose how to organize the structure of the control system and its mappings for visualization purposes. Each variable can be recognized as a set of attributes: value, quality, timestamp and meaning. The first three attributes can be simply expressed as simple (primitive) or complex values and bound to the graphics on the screen in a generic way. The fourth attribute (meaning) is usually assumed not to change over time, and therefore the interface behavior and appearance are designed (hard-coded) to express it in a communicative way. For example, we can distinguish a selected part of the screen to allow the operator to communicate with a chromatograph analyzer in a pharmacy automation process.

Unfortunately, this design-time approach is often too rigid to seamlessly adapt to, for example, replacement of a device by a new one from another vendor. Furthermore, the hard-coded approach is useless when we must deal with multifunction devices that use pluggable components and a variety of accessories. To avoid this unnecessary design cost and avoid proprietary solutions, we need a next generation solution that can be called “Semantic HMI”. Semantic HMI is an approach that relies on discovering the meaning of process variables using the metadata provided by the plant floor measurement and control devices, like analyzers, PLCs, DCSs, etc. In this approach the metadata must be provided as a context for the real-time process data and processed simultaneously by a smart enough semantic HMI.
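The variable attributes and the metadata-driven binding described above can be sketched in a few lines of Python. This is a minimal illustration only – all names (ProcessVariable, pick_widget, the metadata keys) are hypothetical and not part of any OPC UA API:

```python
from dataclasses import dataclass
import time

@dataclass
class ProcessVariable:
    # The four attributes of a process variable discussed above.
    value: float
    quality: str        # e.g. "good", "uncertain", "bad"
    timestamp: float
    meaning: dict       # metadata describing the variable's semantics

def pick_widget(var: ProcessVariable) -> str:
    """Choose a screen element from metadata instead of hard-coding it,
    so replacing the device only changes the metadata, not the HMI."""
    if var.meaning.get("kind") == "alarm":
        return "blinking-indicator"
    if var.meaning.get("unit") in ("degC", "K"):
        return "thermometer-gauge"
    return "numeric-readout"

boiler_temp = ProcessVariable(78.5, "good", time.time(),
                              {"kind": "measurement", "unit": "degC"})
print(pick_widget(boiler_temp))  # thermometer-gauge
```

A hard-coded HMI would map the chromatograph to a fixed screen region at design time; here the same generic code adapts as soon as the device publishes different metadata.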

OPC Unified Architecture technology meets all the requirements, because:

  • It is a platform neutral standard allowing easy embedded implementation.
  • It is designed to support complex data types and object models.
  • It is designed to achieve high speed data transfers using efficient binary protocols.
  • It has broad industry support beyond just process automation and is being used in support of other industry standards such as S95, S88, EDDL, MIMOSA, OAGiS.

The connection between the HMI, as the decision entrance device, and the process control device, as the decision execution device, may engage many technologies (e.g. an RS-232 serial bus located inside the box containing both, the Internet, a wireless connection, etc.). Unfortunately, vulnerability of the communication medium is only one measure of the severity of security issues. The cost of a decision and its consequences together make another measure that must scale the required security robustness. In other words, without authentication of the transferred data, data sources and users, we cannot expect and rely on responsibility. Even in the completely shielded control room of a nuclear power plant, at the end of the day we must know who is responsible for pressing the virtual “ACCEPT” button if any problems occur. On the other hand, can you imagine a message on the screen saying “you must login to continue…” in a really critical situation in a place like that?

There are more and more modern HMI solutions: advanced graphics with high resolutions and touch screens, high IP ratings for front panels, faster CPUs, integration with modern operating systems, etc. However, they must offer much more to be used as a decision entrance device in applications like process control of the municipal heat distribution network located in the city of Lodz, Poland (750k citizens), supplied from three plants with a total thermal output power of 2560 MW producing hot water distributed using ~800 km of pipes interconnected by ~8000 nodes. In an application like that, the most important features are openness (to be seamlessly pluggable), visualization flexibility (to expose process data in the context of process metadata), and appropriate security precautions (to provide selective availability of control functions). It seems that using new standards like OPC UA together with the new technologies mentioned above could cause a synergy effect leading to reusable off-the-shelf products withstanding even the most demanding requirements.

OPC UA Makes Production Traceability Possible

A primary objective of analyzers is to determine the process state/behavior by measuring selected physical values that are characteristic of it. The obtained result – process data – is used to control, trace and optimize the production process.

To integrate analyzers into supervisory control and tracing systems, the process data must be transported and must unambiguously represent the process and product for the parties that are to be interoperable. To meet the above requirement it is proposed to employ OPC Unified Architecture, a universally accepted, platform-neutral communication standard.

In 2008 the OPC Foundation announced support for Analyzer Devices Integration into the OPC Unified Architecture and created a working group composed of end users and vendors with the main goal of developing a common method for data exchange and an analyzer data model for process and laboratory analyzers. In 2009 the OPC Unified Architecture Companion Specification for Analyser Devices was released. To prove the concept, a reference implementation has been developed containing an ADI compliant server and a simple client using the Software Development Kit released by the OPC Foundation.

The model described in the specification is intended to provide a unified view of analyzers irrespective of the underlying device. This Information Model is also referred to as the ADI Information Model. As it was mentioned, analyzers can be further refined into various groups, but the specification defines an Information Model that can be applied to all the groups of analyzers.

The ADI Information Model is located above the DI Information Model. This means that the ADI model refers to definitions provided by the DI model, but the reverse is not true. To expand the ADI Information Model, additional layers shall be provided.

There is a variety of analyzer groups; however, the ADI Information Model is generic, and therefore, before being implemented in a particular application, it must be expanded by application-specific types and customized by overriding the predefined components.
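The layering can be pictured with plain type inheritance: the ADI layer refines the DI layer, and an application expands the generic ADI types with its own. A rough sketch with hypothetical component names (the real models define far more):

```python
class DeviceType:
    """DI layer: a generic device."""
    components = {"SerialNumber": "String", "Manufacturer": "String"}

class AnalyserDeviceType(DeviceType):
    """ADI layer: refers to DI definitions; DI knows nothing about ADI."""
    components = {**DeviceType.components,
                  "AnalyserStateMachine": "StateMachine"}

class MyChromatographType(AnalyserDeviceType):
    """Application layer: expands the generic ADI model with
    application-specific components."""
    components = {**AnalyserDeviceType.components,
                  "ColumnTemperature": "Float"}
```

The one-way dependency shows up naturally: AnalyserDeviceType inherits from DeviceType, never the reverse.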

Appropriate Information Model adaptation and implementation is a basic requirement to offer ADI-ready and interoperable products. From the experience gained during development of the reference implementation, it can be stated that this process can be accomplished engaging very limited resources. Thanks to the reference implementation and supporting tools like CAS Address Space Model Designer, only basic knowledge of the Address Space and Information Model concepts is required.

Because there is a large variety of analyzer types from various vendors, with many different types of data including complex arrays and structures, a real challenge is the integration of analyzers with control, tracing and monitoring systems. Initiatives such as Process Analytical Technology are driving analyzer integration, and the best way to accomplish this is via open standards. To address this problem, two questions can be distinguished:

  • How to get access to (transport) the process data,
  • How to represent (model) the process data.

OPC Unified Architecture technology meets all the requirements, because:

  • It is a platform neutral standard allowing easy embedded implementation.
  • It is designed to publish real-time, historical and meta data.
  • It is designed to support complex data types and object models.
  • It is designed to achieve high speed data transfers using efficient binary protocols.
  • It has broad industry support beyond just process automation and is being used in support of other industry standards such as S95, S88, EDDL, MIMOSA, OAGiS.

One of the main goals of the OPC Unified Architecture is to provide a consistent mechanism for the integration of process control and enterprise management systems using client/server middle-range archetype. To make systems interoperable, the data transfer mechanism must be associated with a consistent information representation model. OPC UA uses an object as a fundamental notion to represent data and activity of an underlying system. The objects are placeholders of variables, events and methods and are interconnected by references. This concept is similar to well-known object oriented programming (OOP) that is a programming paradigm using “objects” – data structures consisting of fields, events and methods – and their interactions to design computer programs. The OPC UA Information Model provides features such as data abstraction, encapsulation, polymorphism, and inheritance.
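The object model described above – objects as placeholders of variables, events and methods, interconnected by references – can be sketched as a tiny Python data structure (hypothetical names, not an OPC UA SDK):

```python
class Node:
    """A node of the server's Address Space: a placeholder of variables,
    methods and events, interconnected with other nodes by references."""
    def __init__(self, browse_name, value=None):
        self.browse_name = browse_name
        self.value = value                 # only Variable nodes carry a value
        self.references = []               # list of (reference_type, target)

    def add_reference(self, ref_type, target):
        self.references.append((ref_type, target))

    def components(self):
        return [t for r, t in self.references if r == "HasComponent"]

# A miniature model: a boiler object owning a temperature variable
boiler = Node("Boiler")
temperature = Node("Temperature", value=78.5)
boiler.add_reference("HasComponent", temperature)
```

As in OOP, the behavior and data of the underlying system live in the objects, while the references make the structure navigable.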

The OPC UA object model allows servers to provide type definitions for objects and their components. Type definitions may be abstract, and may be inherited by new types to reflect polymorphism. They may also be common or they may be system-specific. Using the type definitions to describe the information exposed by the server allows:

  • Development against type definition.
  • Unambiguous assignment of semantics to the data expected by the client.

Having defined types in advance, clients may provide dedicated functionality, for example: displaying the information in the context of specific graphics.
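The pay-off of developing against a type definition can be shown in a few lines. Here a client keeps one renderer per known type and falls back to a generic view otherwise; the type names and render functions are hypothetical:

```python
# Type definitions a server might expose for its objects (hypothetical)
object_types = {"Valve1": "ValveType", "Pump7": "PumpType", "Tag42": "BaseObjectType"}

# A client developed against the known types: one dedicated view per type
renderers = {
    "ValveType": lambda name: f"<valve-graphic {name}>",
    "PumpType":  lambda name: f"<pump-graphic {name}>",
}

def render(name: str) -> str:
    """Display an object in the context of graphics specific to its type,
    falling back to a generic view for unknown types."""
    type_definition = object_types.get(name, "BaseObjectType")
    dedicated = renderers.get(type_definition)
    return dedicated(name) if dedicated else f"<generic-view {name}>"

print(render("Valve1"))  # <valve-graphic Valve1>
print(render("Tag42"))   # <generic-view Tag42>
```

Because the type is known in advance, the dedicated functionality can be built, tested and shipped before the client ever connects to a particular server.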

The Information Model is a very powerful concept, but it is abstract and hence, in a real environment, it must be implemented in terms of bit streams (to make information transferable) and addresses (to make information selectively available).

Information exposed by the OPC UA Server is composite. Generally speaking, to select a particular target piece of information a client has two options: random access or browsing. Random access requires that any target entity must have been assigned globally unique address and the clients must know it in advance. We call them well-known addresses. It is applicable mostly to entities defined by standardization bodies. The browsing approach means that clients walk down available paths that build up the structure of information. This process is costly, because instead of pointing out the target, we need to discover the structure of information step by step using relative identifiers. The main advantage of this approach is that clients do not need any prior knowledge of the structure – clients of this type are called generic clients. To minimize the cost, after having found the target, every access to it can use random access. Random access is possible since the browsing path is convertible to a globally unique address using the server services.
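The two access options can be sketched with a toy address space. Browsing walks relative names step by step; once the target is found, its unique NodeId is used for cheap random access. The NodeId values below are made up for illustration:

```python
# A toy address space: NodeId -> browse name and child references
address_space = {
    "ns=2;i=1001": {"name": "Objects",     "children": {"Boiler": "ns=2;i=1002"}},
    "ns=2;i=1002": {"name": "Boiler",      "children": {"Temperature": "ns=2;i=1003"}},
    "ns=2;i=1003": {"name": "Temperature", "children": {}},
}

def translate_browse_path(start_id, relative_path):
    """Browse step by step using relative identifiers and return the
    globally unique NodeId, so every later access can be random access."""
    node_id = start_id
    for browse_name in relative_path:
        node_id = address_space[node_id]["children"][browse_name]
    return node_id

target = translate_browse_path("ns=2;i=1001", ["Boiler", "Temperature"])
print(target)  # ns=2;i=1003
```

In OPC UA this conversion is what the server's TranslateBrowsePathsToNodeIds service provides; a generic client needs no prior knowledge of the structure and pays the browsing cost only once.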

OPC UA Makes Smart Utility Distribution Systems Possible

Most of us don’t give much thought to the major utilities until one or more of them do not work or the price goes up. In Poland, 15% to 20% of generated heat is lost in transit from the manufacturer (Combined Heat & Power plants) to consumers, which amounts to hundreds of millions of euro a year for the several biggest national networks. In most cases, nonrenewable conventional fossil fuel must be used up in order to produce that heat, i.e. natural resources must be depleted and the environment must be polluted.

Following the concept of smart grids, more and more companies decide to start working on smart utility distribution systems (gas, water, chilled water, or even oil) to improve performance and availability, and to enable consumers to monitor consumption and influence its economical use.

An example is the heating system of Warsaw that is the largest centralized district heating system in Poland and one of the largest in the world. Through the district heating network common for the whole city area, it provides heat to almost 19 thousand buildings in Warsaw, thus satisfying ca. 80% of the demand. This municipal heating system consists of almost 1700 km of network. Power transmitted from the sources amounts to ca. 5200 MW. Ca. 10000 GWh of heat is supplied to the consumers via the heating network.

Important components of the heating system that are involved in heat transmission to the customers are (read full case study):

  • Water pumping stations
  • Consumer exchanger substations
  • Heat chambers

Generally speaking, the task of the “smart distribution” is to support all processes that will make improvement in its operational performance possible. Therefore, with the aim of optimizing processes, the solution should provide:

  • Availability management
  • Costs management

Usually the above tasks are contradictory to some extent, e.g. when minimizing the cost we cannot ignore the consumer’s needs.

Optimization is a method of determining the best (optimal) solution. It is a search for an extreme of a certain function from the point of view of a specific criterion (index) (e.g. cost, temperature, time, etc.).

The selection of the indexes depends on many factors, but in any case we need real-time and historical data gathered from highly distributed process control devices (PLCs, distributed I/O, meters, etc.) to provide optimal process control. In the example described above, up to 500 000 values are expected to be measured for this purpose.
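As a toy illustration of such an index, consider a purely hypothetical cost model where transit heat losses grow with supply temperature while pumping cost falls with it; the optimum is the extreme of the combined criterion. The coefficients are invented for the sketch and carry no physical authority:

```python
def cost_index(supply_temp: float) -> float:
    """Hypothetical criterion: transit heat loss rises with temperature,
    pumping cost falls with it (lower temp -> higher flow -> more pumping)."""
    heat_loss_cost = 0.8 * supply_temp
    pumping_cost = 5000.0 / supply_temp
    return heat_loss_cost + pumping_cost

# Search the feasible range (keeping consumers supplied) for the minimum
candidates = range(60, 121)
best = min(candidates, key=cost_index)
print(best)
```

Real indexes are built from the hundreds of thousands of measured values mentioned above, and the search is a constrained, multi-variable optimization rather than a one-dimensional scan.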

In order to make the design and analysis of such an elaborate system possible, it is necessary to distribute certain function groups that are logically relevant to each other, using the compound system concept. A well-defined functionality boundary must be a distinguishing feature of each system of that type. To perform their functions, those systems must communicate, creating mutual links.

To fulfill the above requirements of Smart Utility Distribution Systems we need the following subsystems:

  • Optimization: supervisory and optimal control of the real-time processes
  • Telemetry: remote control and data acquisition
  • Repository: database management systems to archive process data

To make this architecture deployable and, next, maintainable some critical issues must be addressed:

  • Openness – communication between components is based on a common open standard
  • Unified data access – real-time, historical and metadata must be available to all clients using a common publishing mechanism
  • Complex data – with the goal of protecting data integrity, complex process data must be supported
  • Security – the strategic nature of these systems requires appropriate security protection against malicious attacks
  • Internet technology – it is obvious that Internet technology must be used at the data transportation level between the systems, even if we are going to build a separate private network

In my opinion, the only answer to the question of how to meet these requirements is OPC Unified Architecture (OPC UA). It is a set of specifications for the development of software connecting such systems as ERP, SAP, GIS, MES or process control systems. These systems are designed for information exchange and are used for the control and supervision of real-time industrial processes. OPC UA defines the infrastructure modeling concept in order to facilitate the exchange of process data. The whole architecture of the new standard improves and extends the previous OPC (now called classic) capabilities in the fields of application security, stability, event tracking and data management, thus improving interoperability of the distributed architecture components.

OPC UA permits easier cooperation and data exchange between the process control and business management layers. It is designed so as to support a wide range of devices from the lowest level with PLCs to the distributed systems dealing with IT management in an enterprise.

It is worth noting that OPC UA technology is based on services and objects. For more than a decade software authors have been using solutions based on objects and services, but those solutions had never been transferred directly to industrial applications. OPC Unified Architecture has become the first standard close to the technological process that is of a dual nature, both object oriented (Object Oriented Architecture – OOA) and service oriented (Service Oriented Architecture – SOA).

The application of the OPC Unified Architecture standard as a foundation for the proposed architecture will enable us to:

  • Standardize communication between component systems
  • Create a consistent information model that is available to all systems and illustrates the system structure
  • Create a database model (metadata) based on an OPC UA information model, thus giving applications that use the Repository access not only to process data but also to metadata describing the system objects
  • Provide open solutions, i.e. the possibility of free connection of the next components in the future
  • Build even global solutions, as OPC UA is an Internet technology

The OPC UA standard allows us to get an open, interoperable and scalable architecture, thus making the development of the infrastructure and its use for other tasks in the future possible. As the proposed architecture is based on open connectivity standards, it provides a framework for the integration of highly distributed “islands of automation” with top-level applications employing the artificial intelligence idea for optimal control of the Distribution Network as a whole.

OPC UA – Specifications

Introduction

OPC Unified Architecture (OPC UA) is described in a layered set of specifications broken into parts. It is purposely described in abstract terms and only in selected parts coupled to existing technology on which software can be built. This layering is intentional and helps isolate changes in OPC UA from changes in the technology used to implement it.

The OPC UA specifications are organized as a multi-part document combined in the following sets:

  • Core specification
  • Access type specification
  • Utility specification

The first set specifies the core capabilities of OPC UA. Those core capabilities define the concept and structure of the Address Space and the services that operate on it. The access type set applies those core capabilities to specific models of data access. As in OPC Classic, the following are distinguished: Data Access (DA), Alarms and Conditions (A&C) and Historical Access (HA). A new access mode is provided as a result of introducing the programs concept and aggregation mechanisms. This set also specifies the UA server discovery process. Those mechanisms are not directly dedicated to supporting data exchange, but they play a very important role in the whole interoperability process.

The core set contains the following specifications:

  • Part 1 – Overview and Concepts: presents the concepts and overview of OPC Unified Architecture.
  • Part 2 – Security Model: describes the model for securing interactions between OPC UA clients and servers.
  • Part 3 – Address Space Model: describes an object model that servers use to expose underlying real-time processes to create an OPC UA connectivity space.
  • Part 4 – Services: specifies the services provided by OPC UA servers.
  • Part 5 – Information Model: specifies information representations – types that OPC UA servers use to expose underlying real-time processes.
  • Part 6 – Mappings: specifies transport mappings and data encodings supported by OPC UA.
  • Part 7 – Profiles: introduces the concept of profiles and defines available profiles that are groups of services or functionality.

The access type set contains the following specifications:

  • Part 8 – Data Access: specifies the use of OPC UA for data access.
  • Part 9 – Alarms and Conditions: specifies the use of OPC UA support for accessing alarms and conditions.
  • Part 10 – Programs: specifies OPC UA support for accessing programs.
  • Part 11 – Historical Access: specifies the use of OPC UA for historical access. This access includes both historical data and historical events.

The utility specification parts contain the following specifications:

  • Part 12 – Discovery: introduces the concept of the Discovery Server and specifies how OPC UA clients and servers should interact to recognize OPC UA connectivity.
  • Part 13 – Aggregates: describes ways of aggregating data.

Overview and Concepts

This part describes the goal of OPC UA and introduces the following models to achieve it:

  • Address Space and information model to represent structure, behavior, semantics, and infrastructure of the underlying real-time system.
  • Message model to interact between applications.
  • Communication models to transfer data over the network.
  • Conformance model to guarantee interoperability between systems.
  • Security model to guarantee cyber security addressing client/server authorization, data integrity and encryption.

Security Model

This part describes the OPC UA security model. OPC UA provides countermeasures to resist threats against the environments in which it will be deployed, and it describes how OPC UA relies upon other standards for security. The proposed architecture is structured into an application layer and a communication layer. The introduced security policies specify which security mechanisms are to be used: the server uses them to announce which mechanisms it supports, and the client uses them to select one of the available policies when establishing the connection.
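The announce-and-select negotiation described above can be sketched in a few lines. This is an illustrative sketch only, not the normative handshake; the policy names below are well-known OPC UA security policy identifiers, while the function and variable names are assumptions made for the example.

```python
# Sketch of security policy negotiation: the server announces the policies
# it supports, and the client picks the first of its preferences that the
# server also offers. Function and variable names are illustrative.

def select_policy(server_policies, client_preferences):
    """Return the first client-preferred policy the server also supports."""
    for policy in client_preferences:
        if policy in server_policies:
            return policy
    raise RuntimeError("no mutually supported security policy")

server_policies = {"Basic256Sha256", "Basic128Rsa15", "None"}
client_preferences = ["Aes256_Sha256_RsaPss", "Basic256Sha256", "None"]

print(select_policy(server_policies, client_preferences))  # Basic256Sha256
```

In a real deployment the "None" policy would be acceptable only for testing, since it disables signing and encryption.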

Address Space

There is no doubt that information technology and process control engineering have to be integrated to benefit from macro optimization and the synergy effect. To integrate them, we must make the systems interoperable, which entails exchanging information; but to exchange information, it has to be represented as computer-centric (storable in binary memory) and transferable (a stream of bits) data. According to the specification, the set of objects that an OPC UA server makes available to clients as data representing an underlying real-time system is referred to as its Address Space. The groundbreaking feature of the Address Space concept is that it allows representing both the real process environment and real-time process behavior by a unique means mutually understandable by diverse systems.
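The idea of a browsable graph of nodes can be sketched minimally as follows. This is an assumption-laden illustration, not the normative node classes of Part 3: the class, attribute, and reference names are simplified stand-ins.

```python
# Minimal sketch of the Address Space idea: nodes with attributes, connected
# by typed references, that a server exposes so clients can browse from a
# well-known starting node. All names here are illustrative simplifications.

class Node:
    def __init__(self, node_id, browse_name, value=None):
        self.node_id = node_id
        self.browse_name = browse_name
        self.value = value
        self.references = []  # list of (reference_type, target Node)

    def add_reference(self, reference_type, target):
        self.references.append((reference_type, target))

# Represent a boiler object with one temperature variable.
objects = Node("ns=0;i=85", "Objects")
boiler = Node("ns=2;i=1", "Boiler1")
temperature = Node("ns=2;i=2", "Temperature", value=87.5)

objects.add_reference("Organizes", boiler)
boiler.add_reference("HasComponent", temperature)

# A client browses the references of a node to discover its structure.
for ref_type, target in boiler.references:
    print(ref_type, target.browse_name, target.value)
```

The point of the sketch is that structure (references), semantics (browse names and types), and live values travel together in one model, which is exactly what makes the Address Space mutually understandable by diverse systems.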

Services

The OPC UA services described in this part are a collection of abstract remote procedure calls that are to be implemented by servers and called by clients. The services are considered abstract because no particular implementation is defined in this part. The Mappings part describes the concrete mappings supported for implementation. Separating the service definition from its implementation allows harmonization with newly emerging technologies simply by defining new mappings.
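The separation of abstract service definition from concrete mapping can be illustrated with a small sketch: a Read-like service defined against an interchangeable encoding. All class and method names below are assumptions for the example, not the specification's service signatures.

```python
# Sketch of abstract-service / concrete-mapping separation: the read service
# is defined once, while the wire-level encoding is a pluggable strategy.
# Names are illustrative, not the normative OPC UA service signatures.

from abc import ABC, abstractmethod
import json

class Encoding(ABC):
    @abstractmethod
    def encode(self, payload: dict) -> bytes: ...

class JsonEncoding(Encoding):
    def encode(self, payload):
        return json.dumps(payload).encode("utf-8")

class Server:
    def __init__(self, encoding: Encoding):
        self.encoding = encoding
        self.values = {"ns=2;i=2": 87.5}

    def read(self, node_id):
        # Abstract service logic stays the same for every mapping.
        payload = {"node_id": node_id, "value": self.values[node_id]}
        # The concrete mapping decides how it appears on the wire.
        return self.encoding.encode(payload)

server = Server(JsonEncoding())
print(server.read("ns=2;i=2"))
```

Swapping in a binary encoding class would change nothing in the service logic, which is the harmonization benefit the paragraph above describes.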

Information Model

To make the data exposed by the Address Space mutually understandable by diverse systems, the information model part standardizes the information representation as computer centric data. To promote interoperability, the information model defines the content of the Address Space of an empty OPC UA server. This content can be used as a starting browse point to discover all information relevant to any client. Definitions provided in this part are considered abstract because they do not define any particular representation on the wire. To make the solution open for new technologies, the representation mappings are postponed to the part Mappings. The solution proposed in this model is also open to defining vendor specific representations.

Mappings

This part defines mappings between abstract definitions contained in the specification (e.g. in the parts: Information Model, Services, Security Model) and technologies that can be used to implement them. Mappings are organized into three groups: data encodings, security protocols and transport protocols. Different mappings are combined together to create stack profiles.
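The composition of the three mapping groups into a stack profile can be shown with a toy example. The technology names below reflect mappings that Part 6 defines; the composition function itself is purely illustrative.

```python
# Sketch of how the three mapping groups combine into a stack profile:
# one data encoding + one security protocol + one transport protocol.
# The combining function is illustrative, not part of the specification.

data_encodings = ["UA Binary", "XML"]
security_protocols = ["UA SecureConversation", "WS-SecureConversation"]
transports = ["UA TCP", "SOAP/HTTP"]

def stack_profile(encoding, security, transport):
    return f"{encoding} + {security} + {transport}"

# A well-known combination: binary encoding over the native TCP transport.
print(stack_profile("UA Binary", "UA SecureConversation", "UA TCP"))
```

Not every combination from the three lists is defined as a valid stack profile; the specification enumerates the supported combinations.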

Profiles

This part describes the OPC UA profiles as groups of services or functionality that can be used for conformance level certification. Individual features are grouped into conformance units, which are further grouped into profiles. All OPC UA applications shall implement at least one stack profile and can communicate only with other OPC UA applications that implement the same stack profile. Servers and clients are tested against these profiles and must be able to describe which features they support.
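The grouping of features into conformance units and profiles, and the rule that two applications must share a stack profile to communicate, can be sketched with simple set operations. The profile names and unit names below are illustrative assumptions, not the normative Part 7 catalog.

```python
# Sketch of profile-based conformance: features grouped into conformance
# units, units grouped into named profiles, and an interoperability check
# requiring at least one shared profile. All names are illustrative.

profiles = {
    "Standard UA Server": {"DataAccess", "Subscriptions", "SecureChannel"},
    "Nano Embedded Device Server": {"DataAccess", "SecureChannel"},
}

def supported_units(app_profiles):
    """Union of conformance units covered by an application's profiles."""
    units = set()
    for name in app_profiles:
        units |= profiles[name]
    return units

def can_communicate(app_a, app_b):
    """Two applications can interoperate if they share at least one profile."""
    return bool(set(app_a) & set(app_b))

print(supported_units(["Nano Embedded Device Server"]))
print(can_communicate(["Standard UA Server"], ["Standard UA Server"]))  # True
```

This is also the shape of the certification question: a tester enumerates the conformance units an application claims and verifies each one against the profile definition.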

Data Access

This part describes the information model associated with the Data Access (DA) mode. It particularly includes an additional definition of variable types and a complementary description of Address Space objects. This part also includes additional descriptions of node classes and attributes needed for DA, as well as DA specific usage of services to access process data.

Alarms and Conditions

This part describes the representation of events and alarms in the OPC UA Address Space and introduces the concepts of condition, dialog, acknowledgeable condition, confirmable condition and alarm. To expose the above information, it extends the information model defined in other parts and describes alarm specific uses of services.

Programs

This part extends the notion of methods and introduces the concept of programs as a complex, stateful functionality in a server or underlying system that can be invoked and managed by an OPC UA client. The provided definitions describe the standard representation of programs as part of the OPC Unified Architecture information model. The specific use of services is also discussed.

Historical Access

This part describes an extension of the information model associated with Historical Access (HA). It particularly includes additional and complementary definitions of the representation of historical time series data and historical event data. Additionally, this part covers HA specific usage of services to detect and access historical data and events.

Discovery

The main aim of this part is to address the discovery process that allows clients first to find servers on the network and then to find out how to connect to them. This part describes how UA clients and servers interact to exchange information on resources available on the network in different scenarios. To achieve this goal, it introduces the concepts of a discovery server, which is a repository of global-scope information, and a local discovery server, whose main task is to manage information about local resources. Finally, this part describes how to discover UA applications when using common directory service protocols such as UDDI and LDAP.

Aggregates

This part specifies the information model associated with aggregates and describes how to compute and return aggregates like minimum, maximum, average etc. Aggregates can be used with base (live) data as well as historical (HA) data. This part also addresses the aggregate specific usage of services.
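A minimal illustration of interval aggregates like those listed above: raw samples are bucketed into fixed processing intervals, and minimum, maximum, and average are computed per bucket. This is a sketch under simplifying assumptions, not the normative Part 13 algorithms, which additionally define status codes, interpolation, and partial-interval rules.

```python
# Sketch of aggregate computation: bucket timestamped samples into fixed
# processing intervals, then compute min, max, and average per interval.
# Timestamps and the interval length are illustrative.

samples = [(0, 10.0), (1, 12.0), (2, 11.0), (3, 15.0), (4, 14.0), (5, 13.0)]
interval = 3  # length of one processing interval, in seconds

buckets = {}
for t, v in samples:
    buckets.setdefault(t // interval, []).append(v)

for index, values in sorted(buckets.items()):
    start = index * interval
    print(start, min(values), max(values), sum(values) / len(values))
```

The same bucketing works whether the samples come from a live subscription or from a historical (HA) query, which is why the paragraph above notes that aggregates apply to both.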

Related articles

OPC UA Makes Smart Factory Possible

From a historical perspective, certain keywords can be recognized as milestones of the manufacturing enhancement process. These keywords describe the main solution or concept specific to consecutive eras of development. Thus the following terms each had their moment in history: the microprocessor system, automatic processing (PLC), and the redundant high-availability solution. Today, to be in fashion, we must provide smart solutions, and by now almost everything is smart. We have smart cars, smart grids, smart buildings, and smart cities. Therefore we must ask whether it is only a buzzword. Going further: can we imagine smart cigarettes? To be honest, today we do not need artificial intelligence to smoke things like that, but recently I have learned that cigarettes may have a button to change their flavor on demand – it seems we are very close to the keyboard concept. What's more, today cigarettes are required to be digitally signed to be traceable – it seems we are very close to RFID technology and, finally, the Internet of Things concept. Anyway, giving the right answer to this question is only a matter of the definition of the word smart, but nowadays the production of cigarettes, like almost everything else, is doubtless a challenging activity and needs steady improvement of the manufacturing environment to compete successfully on the global market.

Read the story: Smart Factory Deployment Strategy


Related articles

OPC UA Makes Cloud Computing Possible

For someone who has accomplished hundreds of control system projects, it is not easy to accept the fact that we have adopted the most innovative solutions from business technology. First the programmable calculator was produced, and only later did the programmable logic controller (PLC) appear; first the personal computer (PC) was used to prepare invoices, and only later was SCADA deployed on the PC. This post is about the adoption of the Cloud Computing concept by the process control industry and the requirements that must be fulfilled to apply this concept safely.

The cloud concept is becoming more and more popular in what we – somewhat disdainfully – call office suites, or more officially, business management applications. Maybe it could also be widely adopted in our field and give us a new tool to further improve manufacturing efficiency indexes, including cost reduction and improved availability of utilities.

Applications are traditionally classified as:

  • Business management
  • Process management

Customer Relationship Management (CRM) is a business management application, whereas controlling a process using a PLC is an example of process management. As a rule, we do not try to discover relations between such applications or the possibility of integrating their functionality. It is like a myth – they have nothing in common, and that's all. Really? While writing this sentence, the Smart Grid concept immediately comes to my mind, where optimization of energy consumption is located mainly on the customers' side – the energy consumers.

The above example illustrates how a highly distributed measurement environment can be offered as a service.

Cloud Computing is defined as a method of providing requested functionality as a set of services. There are many examples showing that cloud computing is really useful for reducing cost and increasing robustness. Following the Cloud Computing idea and offering control systems as a service requires a mechanism built on the service concept and supporting abstraction and virtualization – the two main pillars of the Cloud Computing paradigm.

In my opinion, this can be achieved by building that mechanism on the foundation of OPC Unified Architecture (see also OPC Unified Architecture – Main Technological Features), which is an out-of-the-box solution derived from Service Oriented Architecture (SOA) principles. Therefore we can say that it is a service-centric solution.

Thanks to the OPC UA standard we are able to abstract the process control as an OPC UA Address Space implementing a selected, process-oriented information model. The Address Space is very useful for offering selective availability, as a means to manage the process representation and the scope of its exposure to users – OPC UA clients.

In the Cloud Computing concept, virtualization is recognized as the possibility of sharing services among many users. An OPC UA server is a publishing mechanism exposing process data and metadata to an unlimited number of clients, so it fulfills this requirement as well.

A multiuser, dynamic, and global environment carries a risk of unauthorized access and raises concerns about how cloud reliability and security could threaten manufacturing stability. Because OPC UA engages a public key infrastructure – the strongest widely used authentication mechanism – the process can be well protected against cyber attacks.

All of the above leads to the conclusion that the process control community is well equipped to adopt Cloud Computing and take advantage of new features that open up new fields of application. The only open question is whether the process control community is ready to put its trust in this emerging technology.

See also: