Qualitative Optimization of Fuzzy Causal Rule Bases using Fuzzy Boolean Nets

João Paulo Carvalho 1,2, José Tomé 1,2
1. INESC-ID, R. Alves Redol 29, 1000-029 Lisboa, Portugal
2. Instituto Superior Técnico, Universidade Técnica de Lisboa, 1000 Lisboa, Portugal
Email: [email protected], [email protected]

This work is partially supported by the FCT – Portuguese Foundation for Science and Technology under project POSI/SRI/47188/2002, and by Fundação ORIENTE – Portuguese Orient Foundation.

ABSTRACT: Fuzzy Causal Rule Bases (FCRb) are widely used and are the most important rule bases in Rule Based Fuzzy Cognitive Maps (RB-FCM) [1][4][5][6]. However, FCRb are subject to several restrictions that create difficulties in their creation and completion. This paper proposes a method to complete and optimize Fuzzy Causal Rule Bases using Fuzzy Boolean Net properties as qualitative universal approximators. Although the proposed approach focuses on FCRb, it can be generalized to any fuzzy rule base.

Keywords: Fuzzy Boolean Nets, Fuzzy Causal Relations, Rule Base Optimization

1 INTRODUCTION

Rule Based Fuzzy Cognitive Maps (RB-FCM) are a qualitative approach to modeling and simulating the dynamics of qualitative systems (such as social, economic or political systems) [4][5][7]. RB-FCM were developed as a tool that can be used by non-engineers and non-mathematicians, eliminating the need for complex mathematical knowledge when modeling qualitative dynamic systems. Fuzzy Causal Relations (FCR) were previously introduced in [1][2][6], and are the most common method to describe the relations between the entities (concepts) of a RB-FCM. FCR are represented and defined through linguistic Fuzzy Causal Rule Bases (FCRb). RB-FCM inference imposes that Fuzzy Causal Rule Bases be complete and involve only one antecedent (multiple antecedent inference is dealt with by internal RB-FCM mechanisms, such as the Fuzzy Causal Accumulation operation [1][4][6]). It also imposes strict restrictions on the linguistic terms involved in the inference [1][2][6]. Another important characteristic of FCRb is the unusually large number of linguistic terms needed to properly represent the involved relations in typical applications (variables with 11 or 13 linguistic terms are common in FCRb) [4][5]. Since FCR data is usually obtained through several "far from ideal" methods, all of the above characteristics and restrictions mean that special care must be taken with FCRb construction when modeling a RB-FCM. FCR data must often be optimized before it can be used in the RB-FCM. We can divide the data into different main categories and optimize it according to its source:


• Expert knowledge:
  o Single expert case: the expert usually expresses knowledge using just a few key rules that must be generalized to the whole Universe of Discourse (UoD) – the FCRb must be optimized using a completion process;
  o Multiple expert case: the data must be optimized by combining different and possibly inconsistent opinions (optimization through inconsistency elimination and incongruence resolution), and by rule base completion;
• Uncertain and sparse quantitative data (observations, measurements, etc.): rule base optimization involves qualitative rule extraction followed by rule base completion.

Several methods have been proposed to address the above problems (or part of them). However, although those methods are valid in most problems, they fall short when dealing with FCR optimization, for several reasons we present here. To solve the problem of FCR data optimization, we propose the use of Fuzzy Boolean Nets (FBN). FBN have been previously introduced as a hybrid fuzzy-neural technique where fuzziness is an emergent property that gives FBN the capability of extracting qualitative fuzzy rules from quantitative data [16][18][20]. They are also universal approximators [17], and therefore natural candidates to solve our problem.

2 FUZZY RULE BASE INTERPOLATION AND COMPLETION METHODS

Methods to deal with incomplete (or sparse) rule bases can be divided into three major categories: rule interpolation, analogical inference, and rule base completion – the work of Valerie Cross and Thomas Sudkamp [8] is an excellent starting point and provides many useful references on this topic. The first two categories can be considered "on-line", in the sense that whenever an input occurs in a region of the UoD not covered by the existing rule base, proximity and similarity to the nearest rules are used to produce an output [8][9][11][13]. "Rule base completion" methods, on the other hand, are "off-line" methods, since additional rules are created before any inference occurs. "On-line" methods are not a valid choice for solving the problem of FCRb completion because they cannot comply with the strict linguistic term restrictions required by the FCR inference process [1][2][3][4][6]. Therefore we must resort to "off-line" rule base completion methods.

"Off-line" completion techniques can be divided into two categories: those that do not require predetermined fuzzy partitions (linguistic terms) of the input and output domains [21], and those that require them [22][14][15]. Once again, the strict linguistic term restrictions of FCR prevent the use of the former techniques, so FCRb completion is confined to one of the variations of the latter. In this technique one must add a rule "If X is Ai Then Z is Cj" for each antecedent linguistic term Ai without reference in the rule base (note that FCR have a single antecedent). A scalar value zj must be generated using either training data or nearby rules, and the selected consequent linguistic term Cj will be the one in which zj has maximal membership (a minimal sketch of this scheme is given at the end of this section). The variations differ in how zj is generated:

1. One can use available training data to learn and generate the rule [15][22];
2. One can use the neighbouring rules Ai-1 and Ai+1 to obtain zj through Region Growing [15];
3. One can use all rules in the rule base to obtain zj through Weighted Averaging [15];
4. One can obtain zj through Interpolation by Similarity of the available rules [14].

Although these approaches are valid in most problems, they fall short when dealing with the FCR optimization problem, for several reasons:

• Automatic rule extraction from quantitative data is usually based on TPE systems [15][22], which are incompatible with the FCR linguistic term set restrictions [1][2][6] and need an unusually large number of training examples in order to produce a complete rule base containing fuzzy variables with 11 or 13 linguistic terms [15]. This is a serious problem when dealing with qualitative data from real-world experts, and in the end one often has to resort to the other completion techniques. This technique can therefore be used to automatically create rule bases from quantitative data, but it is not very adequate for producing complete FCRb;
• In general, completion methods do not produce useful results when the data is too sparse, even in linear problems (see section 5). Unfortunately, very sparse raw FCR are quite common (see section 5);
• Region Growing techniques simply do not produce good results when completing FCRb obtained from expert knowledge (see section 5). This is due to the fact that completion is too "local": rules with a single neighbour keep that neighbour's consequent, which is, as we will show, undesirable behaviour in raw FCRb optimization;
• Weighted Averaging, on the other hand, produces undesirable and uncontrollable results in several situations [15] (see section 5). This is due to the fact that all rules are considered in the completion process (too much global interference);
• Region Growing and Weighted Averaging can be considered the extreme cases of a technique known as Interpolation by Similarity. This technique can be "tailored" to produce much better results than the previous ones in each particular case. However, it is not automatic or "user transparent", and often needs strong parameterization before it can be applied to each particular case. Therefore it does not comply with the RB-FCM philosophy of accessibility for non-math experts;
• Finally, these methods do not provide mechanisms to deal automatically with the problem of inconsistent opinions from several experts, so other methods must be used to complement them.
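To make the completion scheme concrete, the following is a minimal Python sketch of variations 2 and 3 plus the final term selection. It is our own illustration, not code from [14][15]: the function names, the triangular term representation, and the "1 − normalized distance" similarity weight are assumptions standing in for the measures used in those papers.

```python
def triangular(x, left, peak, right):
    """Membership degree of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def max_membership_term(z, term_sets):
    """Select the consequent term Cj in which zj has maximal membership.
    term_sets maps a term name to its (left, peak, right) parameters."""
    return max(term_sets, key=lambda term: triangular(z, *term_sets[term]))

def region_growing(consequents):
    """Variation 2: iteratively fill empty slots (None) with the average
    of their non-empty immediate neighbours. Assumes at least one slot
    is already known, otherwise the loop would not terminate."""
    cons = list(consequents)
    while None in cons:
        snapshot = list(cons)          # grow one "layer" per pass
        for i, c in enumerate(snapshot):
            if c is None:
                nbrs = [snapshot[j] for j in (i - 1, i + 1)
                        if 0 <= j < len(snapshot) and snapshot[j] is not None]
                if nbrs:
                    cons[i] = sum(nbrs) / len(nbrs)
    return cons

def weighted_averaging(centroids, consequents):
    """Variation 3: fill every empty slot with an average of all known
    consequents, weighted by antecedent similarity."""
    known = [(x, z) for x, z in zip(centroids, consequents) if z is not None]
    span = max(centroids) - min(centroids)
    filled = []
    for x, z in zip(centroids, consequents):
        if z is None:
            w = [(1 - abs(x - xk) / span, zk) for xk, zk in known]
            z = sum(wi * zk for wi, zk in w) / sum(wi for wi, _ in w)
        filled.append(z)
    return filled
```

Given a zj from either generator, max_membership_term picks the consequent Cj; section 5 applies these two generators to a concrete sparse FCRb.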

3 FUZZY BOOLEAN NETS

Natural (biological) neural systems have a number of features that lead to their learning capability when exposed to sets of experiments from the real outside world. They also have the capability to use the learnt knowledge to perform reasoning in an approximate way. Fuzzy Boolean Nets (FBN) were developed with the goal of exhibiting this kind of behaviour. FBN can be considered a neural fuzzy model where fuzziness is an inherent emergent property, while in other known models either fuzziness is artificially introduced into neural nets, or neural components are inserted into fuzzy systems. In FBN, neurons are grouped into areas. Each area can be associated with a given variable, or concept. Meshes of weightless connections between antecedent neuron outputs and consequent neuron inputs are used to perform If...Then inference between areas. Neurons are binary, and the meshes are formed by individual random connections (just as in nature). Each neuron contains m inputs for each antecedent area, and an upper limit of (m+1)^N internal unitary memories (FF), where N is the number of antecedents. This number corresponds to maximum granularity, and can be reduced. Each neuron's internal unitary memories (FF) can also have a third state with the meaning "not taught". As in nature, the model is robust in the sense that it is immune to individual neuron or connection errors (which is not the case in other models, such as the classic artificial neural net) and presents good generalization capabilities. The "value" of each concept, when stimulated, is given by the activation ratio of its associated area (the ratio between active neurons – those with output "1" – and the total number of neurons). Later developments use the "not taught" state of the FF and an additional Emotional Layer to deal with validation and to solve dilemmas and conflicting information [19].

3.1 Inference

Inference proceeds in the following way: each consequent neuron samples each of the antecedent areas using its m inputs. Note that m is always much smaller than the number of neurons per area. For rules with N antecedents and a single consequent, each neuron has N*m inputs. FCR rules have a single antecedent; therefore, each consequent neuron will have m inputs. The single operation carried out by each neuron is the combinatorial count of the number of activated inputs from every antecedent (in the single antecedent case, this operation is reduced to counting the active inputs). Neurons have a unitary memory (FF) for each possible count combination, and its value is compared with the corresponding sampled value. If the FF corresponding to the sampled count of all antecedents contains a '1', then the neuron output will be '1' (the neuron will be, or remain, activated); if the FF contains a '0', then the neuron output will be '0'. These operations can all be performed with classic Boolean AND/OR operations (any FBN can be implemented in hardware using basic logic gates). As a result of the inference process (which is parallel), each neuron assumes a binary value, and the inference result is given by the neural activation ratio in the consequent area. It has been proved [18] that from these neuron micro-operations emerges a macro qualitative reasoning capability involving the concepts (fuzzy variables), which can be expressed as rules of the type: IF Antecedent1 is A1 AND Antecedent2 is A2 AND ... THEN Consequent is Ci, where Antecedent1, Antecedent2, ..., AntecedentN are fuzzy variables and A1, A2, ..., Ci are linguistic terms with binomial membership functions (such as "small", "high", etc.).

3.2 Learning

Learning is performed by exposing the net to experiments and modifying the internal binary memories of each consequent neuron according to the activation of the m inputs (per antecedent) and the state of that consequent neuron. Each experiment sets or resets the individual neuron's binary memories. Since FBN operation is based on random input samples for each neuron, learning (and inference) is a probabilistic process. For each experiment, a different input configuration (defined by the specific samples of the input areas) is presented to each and every consequent neuron, and addresses one and only one of the internal binary memories of each individual neuron. The updating of each binary memory value depends on its selection (or not) and on the logic value of the consequent neuron. This may be considered a Hebbian type of learning [10] if pre- and post-synaptic activities are given by the activation ratios. Proof that the network converges to a taught rule, and a more detailed description of the learning process, can be found in [20]. It has also been proved [16] that a FBN is capable of learning a set of different rules without cross-influence between rules, and that the number of distinct rules the system can effectively distinguish (in terms of different consequent terms) increases with the square root of m. Finally, it has been proved that a FBN is a Universal Approximator [17], since it theoretically implements a Parzen Window estimator [12]. This means that these networks are capable of implementing any possible multi-input single-output function of the type [0,1]^n → [0,1]. These results give the theoretical background that establishes the capability of these simple binary networks to perform qualitative reasoning and effective learning based on real experiments.
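To make the mechanics above concrete, here is a minimal single-antecedent FBN simulation in Python. It is our own illustrative sketch, not the authors' implementation: the class name, the ternary memory encoding, and the resampling of connections at every step (a real FBN uses fixed random meshes) are simplifying assumptions.

```python
import random

class SingleAntecedentFBN:
    """Illustrative single-antecedent Fuzzy Boolean Net (our sketch).

    Concept values are activation ratios in [0, 1]. Each consequent
    neuron samples m antecedent neurons; its m + 1 ternary memories (FF)
    are indexed by the count of active samples. FF values: 1, 0, or
    None ("not taught")."""

    def __init__(self, neurons=128, m=25, seed=0):
        self.n, self.m = neurons, m
        self.rng = random.Random(seed)
        # One FF per possible activation count (maximum granularity).
        self.ff = [[None] * (m + 1) for _ in range(neurons)]

    def _area(self, ratio):
        """Binary area whose activation ratio encodes `ratio`."""
        active = round(ratio * self.n)
        area = [1] * active + [0] * (self.n - active)
        self.rng.shuffle(area)
        return area

    def train(self, x, z, epochs=20):
        """Teach the experiment (x -> z): each consequent neuron counts m
        random samples of the antecedent area and stores its own desired
        state in the FF addressed by that count. (A real FBN uses the
        actual consequent neuron state and fixed meshes.)"""
        for _ in range(epochs):
            area, desired = self._area(x), self._area(z)
            for neuron in range(self.n):
                count = sum(self.rng.choice(area) for _ in range(self.m))
                self.ff[neuron][count] = desired[neuron]

    def infer(self, x, runs=10):
        """Average consequent activation ratio over several probabilistic
        runs. Not-taught FF count as inactive here; a real FBN uses them
        to validate the result [19]."""
        total = 0.0
        for _ in range(runs):
            area = self._area(x)
            active = sum(
                1 for neuron in range(self.n)
                if self.ff[neuron][sum(self.rng.choice(area)
                                       for _ in range(self.m))] == 1)
            total += active / self.n
        return total / runs
```

With 128 neurons and m = 25, each training pair influences roughly 1/√25 = 20% of the input domain, which is the coverage figure used in section 4.1.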

4 QUALITATIVE OPTIMIZATION OF FUZZY CAUSAL RULE BASES

The option to use FBN to optimize raw FCR data is based on the fact that the FBN properties as qualitative universal approximators allow a seamless and data-independent optimization process, where rule learning and rule completion are integrated in a single technique that depends only on the data source (single expert, multiple experts, or quantitative data). Moreover, we will show that all sources can be used simultaneously. In order to use FBN in the optimization process, the antecedent and consequent linguistic term sets of the variables involved in the causal relation must be properly defined a priori: even though FBN have the capability of automatically extracting linguistic membership functions from raw quantitative data, these membership functions do not abide by the strict restrictions necessary in FCR, so this capability cannot be used to optimize the FCR. The centroid of each linguistic term membership function must also be made available. The following sections detail the process of FCR optimization using FBN.

4.1 Single Expert Knowledge Optimization

Whenever FCR knowledge is obtained from a single expert, all provided rules are obviously considered valid, unless the expert states uncertainty regarding specific rules. If the expert does not provide a complete rule base, then the rule base must necessarily be optimized through completion before it can be used (a very common situation due to the high number of linguistic terms usually involved in FCR). The FBN mesh-based structure provides good generalization capability: even small FBN can automatically interpolate values over large areas where training data is missing. For example, a FBN with 128 neurons per area, each with 25 inputs, can properly cover 1/√25 = 20% of the input area for each provided crisp input [16]. This is a theoretical limit, and in practice we can obtain even better coverage. Therefore we can use such a FBN to complete any FCRb that can be described by 5 evenly spaced rules. It is important to note that, even if 5 rules are sufficient to describe the relation, the causal rule base still needs to be completed, because 11 or 13 linguistic term sets are common (and necessary) in RB-FCM fuzzy variables [1][4][5][6][7]. The procedure is as follows (a code sketch of steps 1-5 is given at the end of section 4):

1. Use a FBN with one antecedent and one consequent area. Define 128 neurons per area with 25 inputs each, and use maximum granularity. Although a larger FBN could provide a finer approximation degree, these settings provide a good compromise between computational performance and results.
2. For each available expert rule, obtain the centroids of the antecedent and consequent linguistic terms (xi, zk).
3. Use all pairs (xi, zk) as training data for the FBN. Since all rules are considered valid, there is no need to use the FBN validation mechanisms and emotional layer. In such a FBN, twenty training epochs are sufficient to produce stable results.
4. After training completion, the FBN behaves as a qualitative approximator for the FCR in the whole UoD.
5. To obtain the consequent of a missing rule (Cj), feed the FBN with the centroid of the antecedent linguistic term of that rule. Since FBN are probabilistic, one should run the FBN inference several times and average the results to obtain zj. The chosen consequent, Cj, will be the one in which zj has the highest membership degree.
6. Completion is guaranteed as long as at least 5 rules were given by the expert, but even with 3 evenly spaced rules it is possible to obtain good results (see section 5). Since FBN provide a way to verify the validity of a given result (based on the ratio of taught/non-taught neurons used to infer it), it is always possible to know how satisfactory the completion is.

The overall procedure is similar to previous completion methods, replacing known techniques with FBN learning and inference. There are several obvious advantages in using this approach, such as the lack of parameterization and the possibility of evaluating the obtained results, which are important for target users lacking strong mathematical knowledge. Other advantages will be shown in the results section.

4.2 Multiple Expert Knowledge Optimization

Multiple expert knowledge optimization differs from the single expert case in step 3: since experts might provide conflicting or incongruent information, one must use the FBN validation mechanisms and emotional layer to minimize their influence. Extra parameterization is therefore required in this step, but the overall approach remains the same.

4.3 Quantitative Data Optimization

When all available raw FCR data results from crisp uncertain measurements and/or observations, all data is used to train and optimize the FBN. The FBN will behave as a qualitative universal approximator, and the rules can be obtained according to step 5. When using the proposed FBN settings, completion is guaranteed as long as no gaps larger than 20% of the UoD are left uncovered; but, once again, it is possible to obtain valid complete rule bases even with gaps up to 80%. This is obviously highly dependent on the relation being modelled (for a gap that size, the relation must be rather linear), but other proposed methods cannot deal with these cases at all (see section 5). Once again, the FBN is capable of automatically providing a measure of how trustworthy the completion result is.

4.4 Multi-Source Optimization

Since quantitative data and expert data are handled in basically the same way, it is possible to use both simultaneously. However, since expert knowledge provides at most a single data pair per expert and per rule, while quantitative data can be in the magnitude of thousands of examples, it is necessary to weight the data according to its source during the training process. A simple method is to train the expert knowledge for a number of epochs proportional to the magnitude difference between quantitative and expert data. This process involves some extra parameterization of the FBN training, but it can be easily automated. Apart from the relative weight of data according to its origin, the remaining optimization process is maintained.
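As a minimal sketch of steps 1-5 of the single-expert procedure, reusing the hypothetical SingleAntecedentFBN class sketched in section 3. The term abbreviations and centroid values are those of the Production/Price example in section 5; mapping centroids from [-1, 1] to activation ratios in [0, 1], and using the nearest centroid as a stand-in for maximal membership, are our assumptions.

```python
# Sketch of the single-expert completion procedure (steps 1-5).
# Assumes the SingleAntecedentFBN class sketched in section 3.
centroids = {"DVM": -0.9, "DM": -0.55, "D": -0.35, "DF": -0.15,
             "DVF": -0.08, "M": 0.0, "IVF": 0.08, "IF": 0.15,
             "I": 0.35, "IM": 0.55, "IVM": 0.9}

def to_ratio(c):        # map a centroid in [-1, 1] to a ratio in [0, 1]
    return (c + 1) / 2

def to_centroid(r):     # inverse mapping
    return 2 * r - 1

# Step 1: one antecedent and one consequent area, 128 neurons, m = 25.
fbn = SingleAntecedentFBN(neurons=128, m=25)

# Steps 2-3: train on the centroids of the expert rules (20 epochs).
expert_rules = [("D", "I"), ("I", "D")]       # e.g. Production -> Price
for ant, cons in expert_rules:
    fbn.train(to_ratio(centroids[ant]), to_ratio(centroids[cons]), epochs=20)

# Steps 4-5: for each antecedent term, infer several times, average,
# and choose the consequent term where zj has maximal membership
# (nearest centroid stands in for maximal membership here).
for term, c in centroids.items():
    zj = to_centroid(fbn.infer(to_ratio(c), runs=10))
    best = min(centroids, key=lambda t: abs(centroids[t] - zj))
    print(f"If X is {term} Then Z is {best}  (zj = {zj:+.2f})")
```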

5 RESULTS

As we have seen, it has been proved that FBN are universal approximators. However, those are theoretical results, where FBN size is not limited by practical considerations. We must therefore see how a FBN with the parameters proposed in section 4 behaves in this regard. In Figure 1 we present the approximation produced by a 128-neuron FBN (25 inputs per neuron) for the non-linear function

f(x) = 0.1 + 0.4 sin(2πx + 1)

We trained the FBN for 40 epochs with the same 6 examples, and obtained an average error of only 4%. The results show the high learning and generalization capabilities of a FBN when dealing with non-linear functions and sparse training data. As a comparison, a TPE system needs 25 regions and 100 different examples to obtain a similar result [15]. If we wanted to translate the FBN results to a fuzzy rule base, the obtained error would be irrelevant even with a very high domain partition.
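This experiment can be approximated with the SingleAntecedentFBN sketch from section 3. The six evenly spaced training points are an assumption (the paper does not list the exact examples), and no claim is made that this simplified simulation reproduces the reported 4% error.

```python
import math

# Target function from the experiment above. PDF extraction may have
# dropped grouping in the printed formula, so we clamp the target into
# the valid [0, 1] activation-ratio range just in case.
def f(x):
    return min(1.0, max(0.0, 0.1 + 0.4 * math.sin(2 * math.pi * x + 1)))

fbn = SingleAntecedentFBN(neurons=128, m=25)   # sketch from section 3

train_xs = [i / 5 for i in range(6)]   # assumed: 6 evenly spaced points
for x in train_xs:
    fbn.train(x, f(x), epochs=40)

# Mean absolute error over a dense grid of mostly unseen inputs.
test_xs = [i / 100 for i in range(101)]
mae = sum(abs(fbn.infer(x) - f(x)) for x in test_xs) / len(test_xs)
print(f"mean absolute error: {mae:.3f}")
```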


Figure 1 – Medium sized FBN result as a function approximator with sparse training data (plotted series: f(x), training data set, FBN output)

Regarding rule base completion, one of the major problems one has to cope with when modelling a FCR is the fact that experts often provide only the minimum information necessary to describe the relation. For example, an economic expert expressing a qualitative causal relation between Production and Price could simply state the following 2 rules (this is just a simple example, and does not necessarily express a valid real world relation):

• "If Production Increases, Then Price Decreases"
• "If Production Decreases, Then Price Increases"

These rules are comprehensive enough for a human, and describe a simple supply/demand causal relation. There is no need for the expert to provide more rules, since the additional information can easily be generalized by a human. Our problem is how to do it optimally and automatically. Consider a case where eleven different linguistic terms are defined in the fuzzy variable Production = {Decreases_Very_Much, Decreases_Much, Decreases, Decreases_Few, Decreases_Very_Few, Maintains, Increases_Very_Few, ..., Increases_Very_Much}. Given that the relation is semantically linear and symmetric, this is obviously a pretty simple task for a human, and it should not be difficult to automate the procedure as long as the number of linguistic terms in Price is similar. Let us consider the simplest case, where the linguistic terms of Price and Production are exactly the same (Figure 2). The intended result would certainly be the one expressed in the second column of Table 1.


Figure 2: Membership functions of Price and Production linguistic terms

However, even in such a simple example, the completion techniques presented in section 2 produce substantially different and unsatisfactory results due to the high sparseness of the available data. Table 1 presents the obtained results and compares them with the FBN approach we propose. With the Region Growing technique, each new rule is based solely on its closest neighbours. This is an iterative process where a rule consequent value is calculated as the average of its non-empty neighbour consequent values. Therefore, due to the sparseness of the available data, most rules simply keep the consequent values of that data, and the results are far from ideal. The Weighted Averaging results show that this technique is disastrous in such sparse rule bases. In this method all consequents are calculated simultaneously. Since only two rules are available, and each rule consequent is an average of the existing rule consequents weighted by the similarity (essentially based on the distance) of their antecedents [15], each rule basically cancels the other, and all obtained consequents (except for the available data) represent either small variations or the absence of variation.

FBN optimization produces the best results: optimal in the centre regions, but far from ideal in the extreme regions of the UoD. This is due to the fact that those regions are not reached by the 20% coverage area of each training example. The immediate solution to this problem is to always extrapolate this kind of knowledge (which necessarily represents a linear relation) to the outer rules, by replacing the linguistic terms provided by the expert with the outermost antecedent linguistic terms. The last column of Table 1 shows that the FBN provides the optimal completion using this approach, even when the interval without training data is close to 80% (the other two methods still give inaccurate results).

The results show that previously presented completion techniques should not be used when data is too sparse, which is often the case in FCR optimization. The exception is Interpolation by Similarity, with which optimal results could be obtained. However, this would imply tailoring parameters for each particular case, thus not complying with our main goals. The examples we chose represent extreme, but common, FCR modelling situations where the proposed approach can make a difference in the optimization of raw rule bases. Due to lack of space, mainstream example results, such as those involving lower sparseness degrees, were not presented. In those cases FBN optimization still produces good results, but so do most other methods. The problem with those methods in such situations is that they remain highly dependent on the location of the missing data [15]. In some simple cases (only a couple of missing rules), some of those methods present flaws while others behave well, and it is difficult to automatically select which one to use. Since our approach always relies on the FBN properties as a qualitative universal approximator, it is immune to these situations, even with non-linear and non-symmetric relations, or in cases where the number or syntax of the consequent linguistic terms differs from those of the antecedent.

Table 1: Optimization by completion of a highly sparse FCRb using different approaches

| Production          | Price Optimal Solution | Price Region Growing | Price Weighted Average | Price FBN   | Price FBN with outer rule extrapolation |
|---------------------|------------------------|----------------------|------------------------|-------------|-----------------------------------------|
| Decreases Very Much | IVM (0.90)             | I (0.35)             | IF (0.11)              | I (0.32)    | IVM (0.78)                              |
| Decreases Much      | IM (0.55)              | I (0.35)             | IVF (0.08)             | I (0.32)    | IM (0.54)                               |
| Decreases           | I (0.35)               | I (0.35)             | I (0.35)               | I (0.30)    | I (0.40)                                |
| Decreases Few       | IF (0.15)              | I (0.35)             | M (0.03)               | IF (0.12)   | IF (0.12)                               |
| Decreases Very Few  | IVF (0.08)             | I (0.35)             | M (0.02)               | IVF (0.08)  | IVF (0.08)                              |
| Maintains           | M (0)                  | M (0)                | M (0)                  | M (-0.04)   | M (0.01)                                |
| Increases Very Few  | DVF (-0.08)            | D (-0.35)            | M (-0.02)              | DVF (-0.09) | DVF (-0.09)                             |
| Increases Few       | DF (-0.15)             | D (-0.35)            | M (-0.03)              | DF (-0.15)  | DF (-0.20)                              |
| Increases           | D (-0.35)              | D (-0.35)            | D (-0.35)              | D (-0.30)   | D (-0.36)                               |
| Increases Much      | DM (-0.55)             | D (-0.35)            | DVF (-0.08)            | D (-0.38)   | DM (-0.54)                              |
| Increases Very Much | DVM (-0.90)            | D (-0.35)            | DF (-0.11)             | D (-0.38)   | DVM (-0.76)                             |
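As a concrete check, the completion sketch from section 2 can be applied to this two-rule example. The module name completion is our own choice (the functions are those of the section 2 sketch), and the centroids are those of Figure 2.

```python
# Applying the section 2 completion sketch to the two-rule
# Production/Price example ("completion" as a module name is assumed).
from completion import region_growing, weighted_averaging

centroids = [-0.9, -0.55, -0.35, -0.15, -0.08, 0.0,
             0.08, 0.15, 0.35, 0.55, 0.9]
# Only two expert rules: Decreases -> I (0.35), Increases -> D (-0.35).
consequents = [None, None, 0.35, None, None, None,
               None, None, -0.35, None, None]

print(region_growing(consequents))
# -> [0.35, 0.35, 0.35, 0.35, 0.35, 0.0,
#     -0.35, -0.35, -0.35, -0.35, -0.35]
# i.e. the "Region Growing" column of Table 1.

print([round(z, 2) for z in weighted_averaging(centroids, consequents)])
# Small values of matching sign, mirroring the "Weighted Average" column
# qualitatively (exact numbers depend on the similarity measure of [15]).
```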

6 GENERALIZATION

Even though we focused on the optimization and completion of Fuzzy Causal Rule Bases, the methods presented in this paper can be generalized to all fuzzy rule bases. However, when we generalize the process, we lose performance due to the exponential increase of FBN size with the number of antecedents: each neuron holds up to (m+1)^N internal memories, so with m = 25 a single-antecedent neuron needs 26 memories, while a three-antecedent neuron needs 26³ ≈ 17,576. Our method still has advantages when dealing with uncertain and sparse quantitative data, since FBN are still capable of extracting qualitative rules and of creating and validating complete rule bases, with the additional advantage of not needing an a priori linguistic term definition (except when, as with FCR, those linguistic terms must adhere to certain restrictions).

7 CONCLUSIONS AND FUTURE DEVELOPMENTS

We have presented a FCRb optimization method based on FBN. This method allows a seamless and data source independent optimization process, with good generalization capabilities, where rule learning and rule completion are integrated in a single technique. In order to accomplish better generalization in the extreme regions of the UoD, future developments include granularization tuning and the use of the entire support of the available rules as a training interval (as opposed to using only their centroids).

REFERENCES

[1] Carvalho, J.P., Tomé, J.A., "Fuzzy Mechanisms for Causal Relations", Proceedings of the Eighth International Fuzzy Systems Association World Congress, IFSA'99, Taiwan, 1999
[2] Carvalho, J.P., Tomé, J.A., "Interpolated Linguistic Terms: Uncertainty Representation in Rule Based Fuzzy Systems", Proceedings of the 22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS2003, Chicago, 2003
[3] Carvalho, J.P., Tomé, J.A., "Interpolated Linguistic Terms", Proceedings of the 23rd International Conference of the North American Fuzzy Information Processing Society, NAFIPS2004, Banff, Canada, 2004
[4] Carvalho, J.P., Tomé, J.A., "Mapas Cognitivos Baseados em Regras Difusas: Modelação e Simulação da Dinâmica de Sistemas Qualitativos" (Rule Based Fuzzy Cognitive Maps: Modelling and Simulating the Dynamics of Qualitative Systems), PhD thesis, Instituto Superior Técnico, Universidade Técnica de Lisboa, Portugal, 2001
[5] Carvalho, J.P., Tomé, J.A., "Qualitative Modelling of an Economic System using Rule Based Fuzzy Cognitive Maps", FUZZ-IEEE 2004 – IEEE International Conference on Fuzzy Systems, Budapest, 2004
[6] Carvalho, J.P., Tomé, J.A., "Rule Based Fuzzy Cognitive Maps – Fuzzy Causal Relations", Computational Intelligence for Modelling, Control and Automation, edited by M. Mohammadian, 1999
[7] Carvalho, J.P., Tomé, J.A., "Rule Based Fuzzy Cognitive Maps – Qualitative Systems Dynamics", Proceedings of the 19th International Conference of the North American Fuzzy Information Processing Society, NAFIPS2000, Atlanta, 2000
[8] Cross, V., Sudkamp, T., "Sparse data and rule base completion", Proceedings of the 22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS2003, Chicago, 2003
[9] Dubois, D., Prade, H., "On Fuzzy Interpolative Reasoning", International Journal of General Systems, vol. 28, 1999
[10] Hebb, D., "The Organization of Behaviour: A Neuropsychological Theory", John Wiley & Sons, 1949
[11] Koczy, L., Hirota, K., "Approximate Reasoning by Linear Rule Interpolation and General Approximation", International Journal of Approximate Reasoning, vol. 9 (3), 1993
[12] Parzen, E., "On Estimation of a Probability Density Function and Mode", Annals of Mathematical Statistics, vol. 33, 1962
[13] Qiao, W.Z., Mizumoto, M., Yan, S., "An improvement to Kóczy and Hirota's interpolative reasoning in sparse fuzzy rule bases", International Journal of Approximate Reasoning, vol. 15 (3), 1996
[14] Sudkamp, T., "Similarity, Interpolation, and Fuzzy Rule Construction", Fuzzy Sets and Systems, vol. 58, 1993
[15] Sudkamp, T., Hammell, R.J., "Interpolation, Completion, and Learning Fuzzy Rules", IEEE Transactions on Systems, Man, and Cybernetics, vol. 24 (2), 1994
[16] Tomé, J.A., Carvalho, J.P., "Rule Capacity in Fuzzy Boolean Networks", Proceedings of the 21st International Conference of the North American Fuzzy Information Processing Society, NAFIPS2002, New Orleans, 2002
[17] Tomé, J.A., "Counting Boolean Networks are Universal Approximators", Proceedings of the 1998 Conference of NAFIPS, Florida, 1998
[18] Tomé, J.A., "Neural Activation Ratio Based Fuzzy Reasoning", Proceedings of the IEEE World Congress on Computational Intelligence, Anchorage, 1998
[19] Tomé, J.A., Carvalho, J.P., "Decision Validation and Emotional Layers on Fuzzy Boolean Networks", Proceedings of the 23rd International Conference of the North American Fuzzy Information Processing Society, NAFIPS2004, Banff, Canada, 2004
[20] Tomé, J.A., Tomé, R., Carvalho, J.P., "Extracting Qualitative Rules from Observations – A Practical Behavioural Application", WSEAS Transactions on Systems, issue 8, vol. 3, 2004
[21] Ughetto, L., Dubois, D., Prade, H., "Fuzzy Interpolation by Convex Completion of Sparse Rule Bases", Proceedings of the Ninth IEEE International Conference on Fuzzy Systems, San Antonio, TX, USA, 2000
[22] Wang, L.X., Mendel, J.M., "Generating fuzzy rules by learning from examples", IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 6, 1992