University of Valladolid students section (Spain)

An OPC server real time simulator based on EcosimPro

Professor Jesus M. Zamarreno

Abstract

Real-time simulators are useful tools in many situations: they can be used for operator training, for analysing the behaviour of real systems without compromising the plant, as a platform for testing new control algorithms or tuning controllers, etc. In this paper, the objective is to build an open simulation architecture that allows accessing and/or modifying the variables included in the simulation from any other tool in a standard way using OPC (OLE for Process Control). The simulator is executed in real time and, in our case, has been generated from the EcosimPro simulation language.

1. Introduction

Usually, when someone wants to build a simulation of a particular process, he or she uses a dedicated simulation tool like ACSL, Simulink, Dymola, Spice, Aspen, etc., or even a standard programming language like C, C++, Fortran, etc. When the simulated process needs to be controlled, a common approach is to code the controllers in the same simulation tool. Another approach would be to use a separate tool (a controller, a SCADA, etc.) that communicates with the simulation, but in this case the difficulty is choosing the communication protocol: it could be done by exchanging files between the two programs, by using sockets, etc., but in any case the communication is usually a very specific and complicated task.

OPC means OLE for Process Control; it is based on Microsoft technology and is an industrial standard that provides a common communication interface allowing individual software components to interact and share data. OPC communication follows a client/server architecture. The OPC server is the data source (like a hardware device at the plant floor), and any OPC-based application can access it to read or write any variable provided by the server (Figure 1). It is an open and flexible solution to the classical proprietary driver problem. Nearly all of the world's major providers of control systems, instrumentation, and process control systems include OPC in their products.

Figure 1. OPC facilitates communication between plant floor devices or databases and custom applications

Any OPC product acting as a client can access the variables of the simulation. The variables can be accessed using an intuitive browser, selecting them by natural names or tags from the tree structure.

One limitation of OPC is that it only runs on Microsoft platforms; specifically, Windows NT is recommended, though it can also be installed on Windows 95 with the DCOM extension, on Windows 98, and now on the recent Windows 2000.

Several tools exist for programming an OPC application; most of them provide C++ classes that facilitate the development of the application. As the OPC server is best built in C++, for a seamless integration of the software it is preferable to write the simulation in C++ or to select an advanced simulation tool able to generate C++ code. One of these simulation tools is EcosimPro.

EcosimPro is a continuous system simulator capable of dealing with differential-algebraic equations (DAEs). Its simulation language (EL) is based on components; that is, it uses object-oriented language ideas, with all their advantages such as encapsulation, inheritance, and reusability of components. Thus, EcosimPro is an object-oriented modelling tool in which the model of a system is described by selecting the elementary units that form it from a predefined library and connecting them through ports. Using symbolic handling of the set of equations formed by the non-causal equations of each component and the equations arising from the connection topology, EcosimPro automatically generates the final model.

As mentioned above, a simulation written in EcosimPro automatically generates C++ code. So the only task to be performed to reach our goal is to integrate the simulation into an OPC server. The result is a program that can be accessed from any OPC client, such as a SCADA, to read the values of the desired variables in a transparent way and with the sampling period selected by the user. If the OPC client is a controller, it can also write the values of the calculated manipulated variables to the OPC server (the simulation) in real time. The different blocks that form the system are represented in Figure 2.

Figure 2. Integration of a real-time simulation into an OPC Server

2. Application Architecture

The OPC server to be built is a real-time simulator. The simulation can represent any dynamic process described by differential and algebraic equations. The proposed simulation language is EcosimPro because it generates C++ code that can easily be integrated into a C++ project. The OPC server has two tasks: integrating the process equations in real time and providing the simulation data as OPC items. As the two tasks must be executed in parallel, the proposed implementation runs the two subprocesses as threads. While the simulation is running, the OPC server object can serve OPC client requests at the same time.
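The two-thread structure can be sketched as follows (a Python illustration of the idea only; the actual server is generated C++, and the model, names and timing constants here are hypothetical). One thread integrates a toy first-order model in paced steps, while the main thread plays the role of an OPC client reading and writing items under a lock:

```python
import threading
import time

class RealTimeSimulator:
    """Toy stand-in for the generated simulation: dx/dt = (u - x) / tau."""

    def __init__(self, tau=5.0, dt=0.1):
        self.tau, self.dt = tau, dt
        self.lock = threading.Lock()        # protects the shared simulation state
        self.vars = {"x": 0.0, "u": 1.0}    # variables later exposed as OPC items

    def integration_thread(self, steps):
        """Integration task: explicit Euler, paced to (scaled-down) real time."""
        for _ in range(steps):
            with self.lock:
                x, u = self.vars["x"], self.vars["u"]
                self.vars["x"] = x + self.dt * (u - x) / self.tau
            time.sleep(self.dt / 100)       # real-time pacing, sped up 100x here

    # what the server task does when a client reads or writes an item
    def read_item(self, name):
        with self.lock:
            return self.vars[name]

    def write_item(self, name, value):
        with self.lock:
            self.vars[name] = value

sim = RealTimeSimulator()
worker = threading.Thread(target=sim.integration_thread, args=(200,))
worker.start()
sim.write_item("u", 2.0)    # a controller client sets a manipulated variable
worker.join()
print(round(sim.read_item("x"), 3))    # x has moved toward the written input
```

The lock is the essential point: both the integration step and any client read/write touch the same variable table, so access must be serialized.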

Variables provided by the OPC server are structured hierarchically. First of all, the OPC server can be located on a different node than the OPC client, so the root of the OPC server is specified as the node and the server name. Variables (items) are aggregated into groups. Hence, any variable stored in the OPC server is specified as follows: Node.Server_name.Group.Item. Items (simulation variables) can be included in groups as convenient; for example, a convenient classification could be either inputs and outputs, or tank, heater, reactor, etc.
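The Node.Server_name.Group.Item addressing can be illustrated with a minimal sketch (all group and item names are hypothetical; a real server resolves the node and server name through DCOM, which is simply ignored here):

```python
# Hypothetical registry of the server's namespace: Group -> Item -> value.
namespace = {
    "Tank":    {"level": 1.2, "inflow": 0.4},
    "Heater":  {"power": 5000.0, "T_out": 351.0},
    "Reactor": {"concentration": 0.8},
}

def read(path):
    """Resolve 'Node.Server_name.Group.Item' down to a value.

    Node and server name select the machine/server; only the last two
    levels are looked up in this in-process sketch.
    """
    node, server, group, item = path.split(".")
    return namespace[group][item]

print(read("plant01.EcosimOPC.Tank.level"))   # -> 1.2
```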

Basically, to build an OPC server, the important steps with respect to the simulation data are building the namespace, that is, the names and types of the data to be provided by the OPC server, and reading/writing values from/to the simulation whenever requested.

Information interchange between a SCADA (OPC Client) and the proposed OPC-based real time simulator is represented in Figure 3.

Figure 3. Communication between a SCADA and the Simulator

3. Conclusion

In this paper, a real-time simulation architecture has been presented that takes advantage of OPC technology to become an open, flexible and robust system that any OPC-enabled application can use to interact with the simulation. Writing a robust simulation in an advanced simulation tool like EcosimPro and providing it with OPC server capability allows the engineer to perform a variety of experiments and configurations from a SCADA before compromising the real plant.

4. Links

EcosimPro: http://www.ecosimpro.com/

OPC Foundation: http://www.opcfoundation.org/

 

Distillation modelling in a packing tower (application to a binary mixture of trichloroethylene, CHCl=CCl2, and tetrachloroethylene, CCl2=CCl2).

 

Jose Luis Martinez Gonzalez

The general model based on mass transfer theory (two-film theory) can be simplified by assuming constant molar flows inside the column and saturated streams throughout the column; therefore, the energy balance can be omitted from the model. The final equations are two partial differential equations (PDEs) that need to be solved simultaneously: one for the liquid phase and the other for the vapor phase, which rises in countercurrent.

                                                    
∂x/∂t = −uL·∂x/∂z − kx·(y* − y)        [eq. 1]

∂y/∂t = uV·∂y/∂z + ky·(y* − y) (1)        [eq. 2]

In these equations it was assumed that the velocities uL and uV have opposite directions, and also that the vapor phase is the controlling phase in the mass transfer process. y* is the molar fraction of the vapor in equilibrium with the bulk composition of the liquid phase, x.

In addition to these equations, it is necessary to include those modelling the behaviour of the condenser and the reboiler (2 ODEs). The final model consists of 4 PDEs (2 for each packing section: 2 in the distillate section, above the feed plate, and the other 2 in the bottoms section), 2 ordinary differential equations, and some algebraic equations to take into account other relationships inside the column.

The PDEs were discretized in the spatial dimension using the finite difference method (an orthogonal collocation method was also tested, without satisfactory results). The vapor and liquid phases flow in countercurrent; therefore, to solve the PDEs it was necessary to adjust the intermediate discretization points (n = 8).
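The countercurrent discretization can be sketched as follows. All numeric values (velocities, the transfer coefficient, the linear pseudo-equilibrium line, and the inlet compositions) are illustrative assumptions, not the column's actual model; first-order upwind differences are used, with the upwind side chosen per phase because the two phases flow in opposite directions:

```python
# Upwind finite differences for two countercurrent transport equations:
#   dx/dt = -uL * dx/dz - k*(ystar(x) - y)   liquid, flowing down (+z, z=0 at top)
#   dy/dt = +uV * dy/dz + k*(ystar(x) - y)   vapor, rising (-z)
# n = 8 interior points as in the text.

n, dz, dt = 8, 1.0 / 9, 0.01
uL, uV, k = 0.5, 0.7, 0.2
x = [0.5] * (n + 2)          # liquid composition on grid points 0..n+1
y = [0.3] * (n + 2)          # vapor composition on grid points 0..n+1

def ystar(xi):               # toy linear "equilibrium" line (illustrative)
    return 0.8 * xi

for _ in range(500):
    xn, yn = x[:], y[:]
    for i in range(1, n + 1):
        transfer = k * (ystar(x[i]) - y[i])
        # liquid flows in +z: upwind difference uses the point above (i-1)
        xn[i] = x[i] + dt * (-uL * (x[i] - x[i - 1]) / dz - transfer)
        # vapor flows in -z: upwind difference uses the point below (i+1)
        yn[i] = y[i] + dt * (+uV * (y[i + 1] - y[i]) / dz + transfer)
    x, y = xn, yn
    x[0], y[n + 1] = 0.5, 0.3    # fixed inlet boundary values for each phase

print(round(x[n // 2], 4), round(y[n // 2], 4))
```

Note how each phase's boundary condition is imposed at its own inlet end (liquid at the top, vapor at the bottom), which is exactly why the intermediate points have to be handled carefully in a countercurrent arrangement.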

The graphical scheme followed in the simulation is shown in the next figure. Note the functional division that was made to categorize all the components inside the packing column.

The simulation was made using EcosimPro (2), an object-oriented simulation language whose main feature is the possibility of dividing each physical system into its basic components. Each element is simulated on its own, creating libraries of similar components (for different scientific/engineering fields); therefore, the future reuse of these components to create more complex systems is guaranteed. EcosimPro is capable of generating C++ source code with the final model, and it also has great computing power, since it is able to solve differential-algebraic equations (DAEs).

The simulation was structured as a distillation column consisting of five subcomponents: two packing sections, a feed plate between both packing sections, a total condenser and a partial reboiler.

 

Boundary conditions to solve the PDEs

x1[0] = xD        x2[0] = mixture of the descending liquid and the feed (3)

y1[0] = yeq(xW)

y2[0] = y1[n+1]        [eq. 3-6]

 

The dynamic results have been fitted to a first-order transfer function with delay (using a steady state with Reflux Ratio (L/V) = 0.75, Reboiler duty = 6000 W, xD = 0.97, xW = 0.002).

Conclusion: it was shown that the developed simulation allows an exact and precise study of the overall distillation system. It can be used as a tool to carry out a large number of trials and show their results, to design control strategies for the column in a simple, quick and effective way, to extrapolate results, and to anticipate situations that have not been tried in the pilot plant. The mass transfer parameters can be fitted using an optimization algorithm (such as the Simplex method or Levenberg-Marquardt) coupled to the dynamic simulation: the algorithm compares the experimental data with those obtained from the simulation, evaluates the objective function to be minimized (quadratic deviations), and calculates new values for the estimated parameters so as to fit better than in the previous iteration.
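The fitting loop described above can be sketched as follows; a coarse grid search stands in for the Simplex or Levenberg-Marquardt step, and the "experimental" data are synthetic, generated from known parameters so that the search should recover them. The first-order-plus-delay form matches the transfer-function fit mentioned earlier:

```python
import math

def fopdt_step(t, K, tau, theta):
    """Step response of a first-order-plus-dead-time transfer function."""
    return 0.0 if t < theta else K * (1.0 - math.exp(-(t - theta) / tau))

# synthetic "experimental" data, generated with K=2.0, tau=5.0, theta=1.0
ts = [0.5 * i for i in range(40)]
data = [fopdt_step(t, 2.0, 5.0, 1.0) for t in ts]

def sse(params):
    """Objective function: sum of quadratic deviations simulation vs. data."""
    K, tau, theta = params
    return sum((fopdt_step(t, K, tau, theta) - d) ** 2 for t, d in zip(ts, data))

# coarse grid search standing in for Simplex / Levenberg-Marquardt
best = min(((K, tau, theta)
            for K in (1.5, 2.0, 2.5)
            for tau in (3.0, 5.0, 7.0)
            for theta in (0.0, 1.0, 2.0)), key=sse)
print(best)   # recovers (2.0, 5.0, 1.0) on this noise-free data
```

The structure (simulate, evaluate quadratic deviations, update parameters) is the same whichever optimizer replaces the grid search.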

A control structure consisting of three PIDs was also simulated on the column to study its response for different values of the tuning parameters. The distillate composition was regulated using the reflux ratio, the bottoms composition was controlled by manipulating the reboiler duty, and a level control was implemented in the reboiler drum.
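The three loops share the same discrete PID law; the sketch below shows one such loop acting on a toy first-order process (all gains and the plant model are hypothetical illustration values, not those used in the reported column simulation):

```python
class PID:
    """Positional discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral, self.prev_e = 0.0, 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt                 # rectangular integration
        deriv = (e - self.prev_e) / self.dt          # backward difference
        self.prev_e = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * deriv

# e.g. the level loop on the reboiler drum, closed on a toy first-order plant
pid = PID(Kp=2.0, Ki=0.5, Kd=0.0, dt=0.1)
level = 0.0
for _ in range(300):
    u = pid.step(1.0, level)            # setpoint 1.0
    level += 0.1 * 0.5 * (u - level)    # toy first-order plant response
print(round(level, 3))                  # settles near the setpoint
```

Integral action is what removes the steady-state offset here, which is why pure P control alone is rarely enough for a level or composition loop.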

Composition profile for each discretization point (steady state: RR = 0.80; Qreb = 6000 W)

 

1 In some articles (e.g. Comp. Chem. Eng. 1984, 8 (1), pp. 43-50) a null temporal derivative is used in the vapor phase, since it can be neglected; if it is not, the stiffness of the system increases.

2 For more information: http://www.ecosimpro.com

3 The inlet liquid in the bottoms section is a mixture of the liquid which goes down and the feed.

 

Design and Analysis of processes with mass/energy recycle

Jose Luis Martinez Gonzalez

The aim of this report is to show the results obtained in a Master's Thesis1, whose objective is to investigate the behaviour of recycle processes and to search for the optimal system configuration to reduce the so-called "snowball effect" (Luyben2, 1993), a typical phenomenon in recycle systems when they are exposed to disturbances in the feed or the kinetics. If recycle streams occur in the plant, the procedure for designing an effective "plantwide" control system becomes much less clear, and much less studied, than for processes that operate as a cascade of units. The dynamics of processes with recycle streams are poorly understood. Controllability and operability need to be considered during the early stages of process design, and there should be a strong interaction between process design and process control to optimize the final operation of the chemical plant.

Luyben2 has studied the "snowball effect" and defined it as the phenomenon in which a small disturbance in the system can create an amplification of the disturbance in the recycle flow rate. It has been demonstrated that, sometimes, an increase of 5% in the fresh feed can lead to a 100% increase in the flowrate of the recycle stream. The major disturbances to which recycle systems are exposed are changes in the feed flowrate or composition and in the reactor temperature, which directly affects the reaction kinetics. The choice of a good control configuration can avoid or reduce the snowball effect. The purpose of this work is to study the causes of the snowball effect and, furthermore, to investigate the conditions that reduce the effects of disturbances on the system dynamics; this study was carried out through a large set of steady-state and dynamic simulations with the computer-aided software available.

The steady-state simulations showed that there is a value of the conversion (called the "critical conversion") above which the system operates with lower recycle flowrates and shows less sensitivity to disturbances. The value of the critical conversion is fundamental, since it provides the clue to finding an operable system with good disturbance rejection: every process should be designed to operate with a conversion above this value (which is specific to each process). If the process is capable of dealing with a conversion higher than this value, the snowball effect can be reduced, and the amplification of perturbations in the system behaviour is smaller than when working with a lower conversion. The recycle flowrate shows a strongly non-linear behaviour for different conversions, which makes the process difficult to control.

A linear relationship was found between the conversion and the product (recycle flow x conversion): XA = a + b·(R·XA), i.e. R = f(XA).
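The nonlinearity can be illustrated with a deliberately simple steady-state model (not the thesis's flowsheet): a CSTR with first-order reaction A -> B and a perfect separator that recycles all unreacted A. A steady-state balance on A gives R = F0^2 / (kVc - F0), so the recycle flow grows much faster than the feed, which is the snowball effect in miniature:

```python
def recycle_flow(F0, kVc):
    """Steady-state recycle for CSTR + perfect separator, first-order A -> B.

    All fresh A must react per pass: k*V*c * R/(F0+R) = F0
    =>  R = F0**2 / (kVc - F0)
    F0:  fresh feed of pure A; kVc: reactor "capacity" (same flow units).
    """
    assert F0 < kVc, "feed beyond reactor capacity: no steady state exists"
    return F0 ** 2 / (kVc - F0)

kVc = 150.0                      # illustrative reactor capacity, kmol/h
for F0 in (100.0, 105.0):        # apply a +5 % fresh-feed disturbance
    print(F0, round(recycle_flow(F0, kVc), 1))
# a 5 % feed increase (100 -> 105) raises R from 200 to 245, i.e. +22.5 %
```

The amplification worsens as F0 approaches kVc, matching the observation that operating above a critical conversion (i.e. with margin in the reactor) softens the effect.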

In the articles published by Luyben (1993), it was suggested to regulate the system using a variable-volume control strategy in the reactor, so that by controlling the flowrate of one of the streams inside the control loop (as in the previous figure) the snowball effect can be softened. A new control configuration is suggested in this thesis to improve on the results obtained with Luyben's configuration. The result was tested using simulation tools, and it is based on the author's experience with steady-state and dynamic simulations of recycle processes.

The set of dynamic simulations suggests a generic rule: "eliminating or evacuating a small amount of the recycled product out of the loop leads the system to smaller recycle flowrates". The proposed scheme is based on eliminating (i.e. making react) the excess of recycled product over the design point.

The proposed control scheme is based on a molar control of the recycled product at the reactor outlet: in case of a disturbance in the feed flow (DFo), the reactor should operate with a bigger volume (V1, reaching a new conversion, X*A) to achieve an outlet stream with the same amount of A as before. This is also a variable-volume strategy, but with the advantage over Luyben's that it can handle sustained disturbances in the feed flow. Luyben's strategy (control of the outlet flow), on the other hand, acts as an integrator, since a sustained disturbance above or below the design point in the feed can fill up or empty the reactor.

It is shown that the main way to avoid the snowball effect is any control structure able to absorb the disturbances in the reactor, without allowing them to propagate downstream into the distillation train. It should be emphasized that the snowball effect is undoubtedly a phenomenon that strongly depends on the structure of the control system that has been chosen.

 

MODEL AND SIMULATION OF DISTILLATION COLUMN

Anabel Garcia

In this paper, we will develop a dynamic model and a computer simulation of an essential part of a refinery: a multicomponent distillation column.

The aim of this work is to obtain a dynamic simulation model of a distillation column in order to study its performance and to test control systems.

To achieve these objectives, we follow several steps:

-To study the process unit. The study of a multicomponent column will be done.

-To write the unit dynamic model. The first step in this study is to obtain the equations that rule the dynamic behaviour of the distillation column. First, mass and energy equations are written for every plate; after that, the plate structure is completed with the hydraulics. The dynamic effects of the reboiler, condenser and reflux drum are included. Finally, the equations of the main control loops are written.

-To simulate it in the ACSL simulation language. With the computer, different situations in the actual plant can be simulated, observing the behaviour of the real process instruments. With this, we can anticipate different situations and know how the process will respond to specific conditions.

-To estimate reasonable parameters

-To check this suitable performance

To get the final results, we let the column evolve until it reaches the steady state, checking that the process variable values and their behaviour are reasonable. From here, once the column is at steady state, we will apply steps to some process variables to analyze how they behave.

We develop a mathematical model for a multicomponent, non-ideal column with 5 components, non-equimolal overflow, and inefficient trays.

A single feed stream is fed as saturated liquid (at its bubble point) onto the feed tray, NF = 26 (see Figure 1). The feed flow rate is F (m3/h) and its composition is zj,f (mole fraction). The overhead vapor is totally condensed in a condenser and flows into the reflux drum, whose liquid holdup is Md (kg). The contents of the drum are assumed to be perfectly mixed with composition xj,d. The liquid in the drum is at its bubble point. Reflux is pumped back to the top tray (NT = 74) of the column at a rate R. Overhead distillate product is removed at a rate D.

We will neglect any delay time (dead time) in the vapor line from the top of the column to the reflux drum and in the reflux line back to the top tray. Notice that yNT is not equal, dynamically, to xd; the two are equal only at steady state.

At the base of the column, liquid bottoms product is removed at a rate B and with a composition xb. Vapor boilup is generated in a reboiler at a rate V. Liquid circulates from the bottom of the column through the tubes of the vertical tube-in-shell reboiler because of the smaller density of the vapor-liquid mixture.

The column contains a total of 74 theoretical trays. The liquid holdup on each tray including the downcomer is Mn. The liquid on each tray is assumed to be perfectly mixed with composition xn: The holdup of the vapor is assumed to be negligible throughout the system.

The liquid rates throughout the column will not be the same dynamically. They will depend on the fluid mechanics of the tray. Often a simple Francis weir formula is used to relate the liquid holdup on the tray (Mn) to the liquid flow rate leaving the tray (Ln):

FL = 3.33 · Lw · (how)^1.5

where: FL = liquid flowrate over the weir

Lw = length of the weir

how = height of liquid over the weir
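As a quick numerical check, the Francis weir formula in English engineering units (FL in ft3/s, Lw and how in ft, constant 3.33, as commonly given in distillation texts; the numbers below are illustrative) can be evaluated directly:

```python
def weir_liquid_flow(Lw_ft, how_ft):
    """Francis weir formula, English engineering units:
       FL [ft^3/s] = 3.33 * Lw [ft] * how [ft] ** 1.5
    """
    return 3.33 * Lw_ft * how_ft ** 1.5

# a 3 ft weir with 0.1 ft of liquid above the crest
print(round(weir_liquid_flow(3.0, 0.1), 4))   # -> 0.3159 ft^3/s
```

Inverting this relationship (liquid height from the holdup Mn, then FL) is how Ln is recovered from Mn in the tray hydraulics.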

Finally, we will not neglect the dynamics of the condenser and the reboiler, because the dynamics of this peripheral equipment are important and must be included in the model.

The assumptions we will make are:

1._Liquid on the tray is perfectly mixed and incompressible

2._Tray vapor holdups are negligible

3._Dynamics of the condenser and the reboiler will be included

4._Vapor and liquid are in thermal equilibrium. A Murphree vapor-phase efficiency will be used to describe the departure from equilibrium:

En,j = (yn,j − yn-1,j) / (yn,j* − yn-1,j)

where: yn,j* = composition of vapor in phase equilibrium with the liquid on the nth tray, of composition xn,j

yn,j = actual composition of vapor leaving the nth tray

yn-1,j = actual composition of vapor entering the nth tray

En,j = Murphree vapor efficiency for the jth component on the nth tray

An appropriate vapor-liquid equilibrium relationship must be used to find yn,j*.
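A per-tray sketch of how the Murphree efficiency corrects the equilibrium prediction; the constant-relative-volatility VLE and all numbers here are illustrative assumptions, not the column's actual property model:

```python
def murphree_vapor(y_in, y_star, E):
    """Actual vapor leaving the tray, from E = (y - y_in) / (y* - y_in):
       y = y_in + E * (y* - y_in)."""
    return y_in + E * (y_star - y_in)

def y_equilibrium(x, alpha=2.5):
    """Constant-relative-volatility VLE, used here as a simple stand-in."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

x_tray, y_entering, E = 0.40, 0.45, 0.7
y_star = y_equilibrium(x_tray)                     # equilibrium prediction
y_leaving = murphree_vapor(y_entering, y_star, E)  # efficiency-corrected value
print(round(y_star, 4), round(y_leaving, 4))
```

With E = 1 the tray is ideal (y = y*); with E < 1 the leaving vapor only covers a fraction of the distance from the entering composition to equilibrium.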

Additional equations include physical property relationships to get densities and enthalpies, a vapor hydraulic equation to calculate vapor flow rates from known tray pressure drops, and a liquid hydraulic relationship to get liquid flow rates over the weirs from tray holdups.

Verification: an important but often neglected part of developing a mathematical model is proving that the model describes the real-world situation. At the design stage this sometimes cannot be done because the plant has not yet been built. However, even in this situation there are usually either similar existing plants or pilot plants from which some experimental dynamic data can be obtained.

 

An Introduction to Parallel Programming

Alberto Pedrosa-Calvo

Abstract

 

Traditionally, sequential programs have been used to solve problems. Nowadays the situation has changed: with the current "low" price of High Performance Computers, we can afford new strategies that obtain the same good results in much less time. In this article we will try to introduce the reader to the world of parallel programming models and show how they can be used to improve our work.

1 Parallelism basic essentials

 

Most humans usually think in a "sequential" way. When a problem is found, the solution always depends on the one who has the problem, who is the only one capable of solving it. After reaching the solution, we sometimes imagine how much easier it would have been if the same problem had been attacked by a group of people. That is where parallelism appears: instead of trying to solve a problem thinking of only one person, imagine a group doing it more easily and/or faster.

In the parallel programming field, there exist many languages, tools and libraries that let us express parallelism. In High Performance Computing (HPC), people usually focus on expressivity and on the capacity to control even the most particular detail of the implementation. Normally the development effort is very high and the result is architecture-dependent, meaning that if we change the machine, the solution may not work properly or may lose its characteristics.

At higher, functional levels, details about synchronization and control are not treated. The programmer expresses parallelism in a natural way, and the compiler, helped by the runtime system, transforms it to run on a specific machine and to obtain the best (not always the maximum) performance.

There is also an intermediate point where the compiler and the programmer work together to get better results. The programmer has to work harder, but can forget about the specific architecture of the machine on which he or she is working.

At all the levels mentioned above, the algorithm or program obtained should have:

1. Concurrency: it can perform several actions simultaneously. This is essential if the program is going to be executed on a multiprocessor machine.

2. Scalability: its efficiency must increase with the number of processors.

3. Locality: it uses local memory instead of communicating with others to obtain data.

4. Modularity: composing little complex entities to obtain a bigger one is essential for the software engineering of a sequential or parallel product.

We consider a concurrent program one which:

- contains several independent processes that can be executed in parallel.

- can operate with more than one device, which could operate in parallel with the program execution.

So we can talk about implicit concurrency or explicit concurrency. The latter involves behaviour designed by the programmer.

Other interesting concepts are sequential program and concurrent program:

+ Sequential program: data declarations and instructions are written in a programming language that specifies the sequence of a list of sentences. This is also called a process.

+ Concurrent program: two or more sequential programs are specified to be executed concurrently as parallel processes (abstract parallelism). Now we call process the behaviour of the group, which is composed of tasks.

Finally we can talk about concurrent, parallel and distributed programs:

* Concurrent program: defines actions to be executed simultaneously.

* Parallel program: a concurrent program designed to be run on a parallel computer.

* Distributed program: a parallel program which will run over a network of autonomous processors without a common main memory, which means that they have to use message passing to share data.

2 Parallel architectures

 

We can consider a conventional computer as a single processor (CPU) executing a program stored in a main memory (see Figure 1).

Figure 1: Conventional computer

A Parallel Computer is a group of processors that can work together to solve the same computational problem. Under this wide definition we can include a multi-processor computer or a group of computers interconnected and sharing data through message passing techniques.

Parallel computers can be grouped as follows [10]:

* Shared memory: The processors use the same main memory, which has the same addresses for all of them (see Figure 2.a). To program these machines, you should load the executing instructions in the internal memory of each processor, and the working data in the common memory. The code running in each processing unit is usually called a thread, and the programmer must be very cautious to prevent data from being overwritten before being used by others.

Very specific hardware is required in order to access the main memory, and we find structures optimized for working with closer sections of memory; there the concepts of NUMA 1 and UMA 2 appear [8].

 

Figure 2: Parallel computers

* Distributed memory: A group of computers, each with its own memory, connected through a communication network. Information goes from one node to another using message passing techniques (Figure 2.b). Each node executes an independent process; sometimes it needs data from another one and sends a request, waiting until it is answered 3.
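The request/reply pattern just described can be sketched with two "nodes" exchanging messages through mailboxes; threads and queues stand in here for real network message passing (e.g. MPI), and all names and values are illustrative:

```python
import threading
import queue

# one blocking "mailbox" per node; a message is a (sender, payload) tuple
mailboxes = {0: queue.Queue(), 1: queue.Queue()}
results = []

def node0():
    mailboxes[1].put((0, "request:temperature"))   # send a request to node 1
    sender, payload = mailboxes[0].get()           # block until the reply arrives
    results.append(payload)

def node1():
    sender, payload = mailboxes[1].get()           # wait for an incoming request
    if payload == "request:temperature":
        mailboxes[sender].put((1, 21.5))           # reply with locally held data

threads = [threading.Thread(target=f) for f in (node0, node1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # [21.5]
```

The blocking `get()` is exactly the "waiting until it is answered" behaviour described above; a non-blocking variant would let the node keep computing while the reply is in flight.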

Distributed memory systems are usually easy to set up, but the user usually spends more programming time on them than on a shared memory one.

* Shared-distributed computers: Mixing the two previous concepts, we can obtain a machine in which each processor has a memory that can be accessed by the others, as when working with a shared memory machine (Figure 2.c).

Another classification can be made in terms of instruction and data streams, obtaining [7]:

- SISD (Single Instruction Stream-Single Data Stream): a single processor with a single instruction stream and a single data set.

- SIMD (Single Instruction Stream-Multiple Data Stream): the same program running over different data sets simultaneously. Widely used in simulations and engineering problems.

- MISD (Multiple Instruction Stream-Single Data Stream): this only appears in architectures specifically designed for high-security systems.

- MIMD (Multiple Instruction Stream-Multiple Data Stream): a general system with several processing and memory units. Here we can find shared and distributed machines.

3 Conclusion

 

Now we can understand the basic concepts of the world of Parallel Computing and Parallel Programming. From this point, and with a specific machine in front of us, we can jump to the different parallel programming models (data parallelism [1, 6, 2], message passing [5, 9]...) and decide which one fits our problem and situation. A world of opportunities and options will appear, not only in programming languages and tools, but also in programming techniques [11, 3, 4].

1 Non-Uniform Memory Access

2 Uniform Memory Access

3 Message passing can also be used in shared memory machines

References

 

[1] V. Adve, A. Carle, E. Granston, S. Hiranandani, K. Kennedy, C. Koelbel, U. Kremer, J. Mellor-Crummey, S. Warren, and C.W. Tseng. Requirements for data-parallel programming environments. IEEE Parallel & Distributed Technology, pages 48-58, Fall 1994.

[2] B. Chapman and H. Zima. Extending HPF for advanced data-parallel applications. IEEE Parallel & Distributed Technology, pages 59-70, Fall 1994.

[3] A. Gonzalez-Escribano, A.J.C. van Gemund, V. Cardeñoso-Payo, J. Alonso-Lopez, D. Martin-Garcia, and A. Pedrosa-Calvo. Measuring the performance impact of SP-restricted programming in shared-memory machines. In Proc. VecPar'2000, pages 715-728, Porto (Portugal), June 2000.

[4] A. Gonzalez-Escribano, A.J.C. van Gemund, V. Cardeñoso-Payo, J. Alonso-Lopez, D. Martin-Garcia, and A. Pedrosa-Calvo. Measuring the performance impact of SP-restricted programming on distributed-memory machines. In XI Jornadas de Paralelismo, Granada (Spain), September 2000.

[5] William Gropp, Ewing Lusk, and Anthony Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press, Cambridge, MA, 1994.

[6] T. Gross, D.R. O'Hallaron, and J. Subhlok. Task parallelism in a High Performance Fortran framework. IEEE Parallel & Distributed Technology, pages 16-26, Fall 1994.

[7] Per Brinch Hansen. Studies in Computational Science: Parallel Programming Paradigms. Prentice Hall, 1995.

[8] K. Hwang. Advanced Computer Architecture: parallelism, scalability, programmability. Series in Computer Science. McGraw Hill, 1993.

[9] Peter S. Pacheco. A User's Guide to MPI. University of San Francisco, San Francisco, CA, March 1998.

[10] B. Wilkinson and M. Allen. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers. Prentice Hall, 1999.

[11] PGamma: Measuring the performance impact of SP programming: resources and tools. URL: http://www.infor.uva.es/pnal/arturo/pgamma.

 

FAULT DETECTION IN HYDROELECTRIC POWER PLANTS USING THE SOM NEURAL NETWORK

Sergio Saludes Rodil, Alberto Vargas Alonso, Luis J. de Miguel

1. Abstract

 

Nowadays, interest in Automatic Fault Diagnosis and Identification (FDI) for dynamic systems is rapidly increasing. We present an application of the Self-Organising Map (SOM) neural network, also known as Kohonen's neural network, to this field. There are two ways in which this ANN can be used: through the quantization error and through trajectories over the map.

2. System under study

 

We are developing a neural-network-based fault detection system for a hydroelectric set. A hydroelectric set has two main components: a generator and a turbine. The system under study is a generator in service, operated by Iberdrola. The fault detection system must be able to detect and classify faults, and also to learn new situations unknown during training. The major problem for the development of this system is the absence of fault data in the historical files.

We have applied the techniques described to the support bearing of the alternator. Preliminary results are shown here.

3. Two strategies for fault detection

 

We have applied two strategies to detect faults. The first one is based on the quantization error; the second one is based on trajectories over the map.

3.1. Monitoring the quantization error

 

Neural networks used to detect faults through the quantization error are trained only with fault-free data. When the neural network is trained, every input is assigned to a cluster, and it is possible to measure its membership degree. This is done through the quantization error, whose calculation is related to the distance between the input and the winning neuron.

Since the training data represent all possible fault-free states, any input whose quantization error is large does not belong to any fault-free state; that is, such inputs are faults. To determine whether a quantization error is large, a threshold is used. The threshold is calculated from the largest quantization error of the training data.
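The threshold test can be sketched as follows; the codebook and the data are small hypothetical stand-ins for a trained SOM and real measurements:

```python
def quantization_error(sample, codebook):
    """Euclidean distance from the sample to its best-matching unit
    (the winning neuron)."""
    return min(sum((s - w) ** 2 for s, w in zip(sample, unit)) ** 0.5
               for unit in codebook)

# codebook "trained" on fault-free data only (values are hypothetical)
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
train = [(0.1, 0.1), (0.9, 0.05), (0.05, 0.95), (1.05, 0.9)]

# threshold: the largest quantization error observed on the training data
threshold = max(quantization_error(s, codebook) for s in train)

print(quantization_error((0.1, 0.05), codebook) > threshold)   # normal -> False
print(quantization_error((3.0, 3.0), codebook) > threshold)    # fault  -> True
```

A sample far from every unit of the fault-free map exceeds the threshold and is flagged, which is exactly why this approach needs no fault examples at all.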

Figure 1 shows the quantization error for fault data and no-fault data. Note the threshold violation.

Figure 1.

This approach is especially suitable when fault data are not available.

3.2. Trajectories over the map

 

The problem of fault detection and identification is more difficult. The SOM should be trained with data describing both normal and abnormal situations. Map units representing faulty states may be labeled according to known examples. The monitoring is based on tracking the operating point: the location of the point on the map indicates the process state and the possible fault type.

Figure 2 shows the trajectory when a fault occurs. This trajectory enters into the zone corresponding to a fault.

Figure 3 shows patterns associated with transitions between states. The first one on the left shows the transition between the stopped state and the running state. The figure in the middle reflects the transition between two different power states. Finally, the figure on the right shows the transition between the running state and the stopped state. These patterns could be used to detect unknown faults.

Figure 2.

Figure 3.

4. Acknowledgement

This research project, whose principal researcher is Jose R. Peran, has received financial support from the European Union under the FEDER programme, project 1FD97-0433.

5. References

1. Alhoniemi, E., Hollmen, J., Simula, O., Vesanto, J. (1999), Process Monitoring and Modeling using the Self-Organizing Map, Integrated Computer-Aided Engineering, Vol. 6, No. 1, pp. 3-14.

2. Haykin, S. (1999), Neural Networks: A Comprehensive Foundation, Prentice Hall.

3. Iivarinen, J., Rauhamaa, J., Visa, A. (1999), Unsupervised Segmentation of Surface Defects, Workshop on Texture Analysis in Machine Vision, Oulu, Finland, June 14-15, pp. 53-58.

4. Kohonen, T. (1997), Self-Organizing Maps, Springer-Verlag.

5. Mataix, C., (1975), Turbomaquinas hidraulicas: turbinas hidraulicas, bombas y ventiladores, ICAI.