This talk provides a brief overview of the main ideas and notions underlying more than fifty years of research in fuzzy set theory, introduced by L. A. Zadeh for representing sets with unsharp boundaries induced by granules of information expressed in words. The discussion is organized around three potential understandings of the grades of membership in a fuzzy set, depending on what the fuzzy set is intended to represent: a group of elements clustered by similarity, including borderline members; a possibility distribution modeling incomplete information; or a preference profile over a set of potential decisions.
The following issues will be recalled and discussed:
We consider the above issues important for the proper use of fuzzy sets and the development of new applications. In particular, it seems that the mathematical notion of a fuzzy set turns out to be useful far beyond its original motivation of representing linguistic information.
Constraint fuzzy interval analysis is based on the representation of fuzzy intervals as a parameterized set of linear functions, where the parameters are restricted to the interval [0,1]. This representation allows fuzzy and possibilistic optimization problems whose coefficients are fuzzy intervals to be transformed into a function space, solved in the space of functions, and then transformed back to the space of fuzzy intervals. The approach is analogous to the Laplace transform method, which takes an analytic problem (differential equations), transforms it into an algebraic problem, solves the algebraic problem, and then transforms back. The analytic problem that is the focus of this talk is optimization. The talk will demonstrate the efficacy and the challenges of using constraint fuzzy interval analysis in fuzzy and possibilistic optimization. In the process, a case is made for considering fuzzy and possibilistic optimization in general, and constraint fuzzy interval optimization in particular, within the more general mathematical settings of flexible and generalized uncertainty optimization, respectively. This in turn makes these methods part of a distinct and explicit mathematical structure, rendering them a useful tool in operations research theory, application, and pedagogy.
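The idea of working on fuzzy intervals through their parameterized endpoint functions can be sketched as follows. This is a minimal illustration, not the speaker's implementation: the triangular shape, the names, and the addition operation are illustrative assumptions.

```python
# Minimal sketch: a triangular fuzzy interval represented by its alpha-cut
# endpoint functions, parameterized over [0, 1], so that arithmetic can be
# carried out pointwise in the function space.

def triangular(a, b, c):
    """Alpha-cut endpoint functions of the triangular fuzzy number (a, b, c)."""
    lower = lambda alpha: a + alpha * (b - a)   # left endpoint, rises with alpha
    upper = lambda alpha: c - alpha * (c - b)   # right endpoint, falls with alpha
    return lower, upper

def add(x, y):
    """Addition of fuzzy intervals, performed pointwise in the function space."""
    xl, xu = x
    yl, yu = y
    return (lambda a: xl(a) + yl(a), lambda a: xu(a) + yu(a))

x = triangular(1, 2, 3)
y = triangular(4, 5, 6)
zl, zu = add(x, y)
# core (alpha = 1) and support (alpha = 0) of the sum
print(zl(1.0), zu(1.0))  # 7.0 7.0
print(zl(0.0), zu(0.0))  # 5.0 9.0
```

The point of the transform-like workflow is visible even in this toy: the computation happens entirely on the endpoint functions, and the resulting pair of functions is read back as a fuzzy interval.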
Real-world decision information is often partially reliable due to partial reliability of the source of information, misperceptions, psychological biases, incompetence, etc. Z-information represents an NL-described (natural-language-described) value of a variable of interest together with the related NL-described reliability. As prof. L. Zadeh mentioned, an important attribute of Z-information is informativeness: a Z-number is informative if its value has high specificity.
Generally, Z-information processing is based on interval and fuzzy calculus, which suffer from the entropy-increasing principle. Consequently, an important question arises: what level of informativeness of a Z-number is sufficient for an appropriate decision? This problem is crucial when we deal with operations on a huge number of Z-numbers, for example, when estimating the major impact of economic events.
In this talk we present some results of an investigation into processing Z-information with a sufficient level of informativeness. Some examples of specificity measures for Z-numbers are shown.
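One classical specificity measure for discrete fuzzy sets, Yager's measure, illustrates the kind of ingredient such measures for Z-numbers can build on. The sketch below is illustrative and not taken from the talk; the membership values are invented.

```python
# Yager's specificity for a discrete fuzzy set: Sp(A) = sum over j of
# (mu_(j) - mu_(j+1)) / j, with memberships sorted in descending order
# and mu_(n+1) = 0.  Sp = 1 for a singleton and decreases as membership
# spreads over more elements.

def yager_specificity(memberships):
    """Specificity of a discrete fuzzy set given its membership values."""
    mu = sorted(memberships, reverse=True) + [0.0]
    return sum((mu[j] - mu[j + 1]) / (j + 1) for j in range(len(memberships)))

print(yager_specificity([1.0]))            # 1.0: a singleton is maximally specific
print(yager_specificity([1.0, 1.0]))       # 0.5: a crisp two-element set
print(yager_specificity([1.0, 0.4, 0.2]))  # fairly specific: secondary memberships are low
```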
to be completed
In this talk we shall stress that a general definition of aggregation may have misleading consequences when it is applied without being adapted to each particular problem. We should carefully take into account the objective we pursue with such an aggregation or index, and introduce appropriate restrictions to ensure that objective within our field of interest. Among other things, we should learn from statistics to distinguish different families of information compactification, being aware of the nature of the inputs and the output. In particular, we should learn from linguistics to differentiate whether the aggregation refers to objects, labels of a unique property, labels coming from different properties (which might be related or unrelated), or any other context. Each specific case might imply specific properties for our summarization, to be imposed if we want to avoid misleading conclusions. Computational issues, together with user demands and limitations, should also be taken into account in the design of indices.
Real-world applications of fuzzy logic in managerial decision-making in business are not as common as they could be. This talk presents some successful applications of fuzzy models and systems in business, including key lessons learned from working with industry.
A special methodology for the analysis and forecasting of time series is presented. It is based on two non-statistical techniques: the fuzzy transform (F-transform) and methods of fuzzy natural logic (FNL). The methodology employs a decomposition approach: the time series is decomposed into three components: trend-cycle, seasonal component, and random disturbances. Unlike traditional approaches that assume the trend-cycle to be an a priori given function, the fuzzy transform makes it possible to find an arbitrary shape of the trend-cycle. It has been proved that the F-transform can eliminate the seasonal component and significantly reduce noise, so that the estimation of the trend-cycle is very credible. Moreover, the computational complexity is low.
Forecasting of the trend-cycle is realized using techniques of fuzzy natural logic. The learned linguistic description, which characterizes the future course of the trend-cycle, is easily understandable because it uses (a fragment of) natural language. The other components are also analyzed and forecast using a combination of the mentioned techniques.
Another outcome of the combination of the F-transform and FNL is the possibility of mining information from time series. Part of this information is again formulated in sentences of natural language.
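The smoothing behaviour of the F-transform can be sketched with a minimal direct/inverse F-transform over a uniform triangular fuzzy partition. This is an illustrative toy, not the speakers' code: the node placement, series, and names are assumptions.

```python
# Direct F-transform: local weighted means F_k of a series with respect to a
# uniform triangular (Ruspini) fuzzy partition.  Inverse F-transform: the
# reconstruction sum_k F_k * A_k(t), which estimates the trend-cycle while
# averaging out fast oscillations (here, a period-4 seasonal component).
import math

def triangular_partition(n_nodes, length):
    """Membership functions A_k of a uniform fuzzy partition over [0, length-1]."""
    h = (length - 1) / (n_nodes - 1)              # distance between nodes
    nodes = [k * h for k in range(n_nodes)]
    def A(k, t):
        return max(0.0, 1.0 - abs(t - nodes[k]) / h)
    return A

def f_transform(y, n_nodes):
    """Direct F-transform components F_k of the series y."""
    A = triangular_partition(n_nodes, len(y))
    F = []
    for k in range(n_nodes):
        w = [A(k, t) for t in range(len(y))]
        F.append(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))
    return F, A

def inverse_f_transform(F, A, length):
    """Inverse F-transform: smoothed reconstruction of the series."""
    return [sum(F[k] * A(k, t) for k in range(len(F))) for t in range(length)]

# noisy series: slow linear trend plus a fast period-4 oscillation
y = [0.05 * t + 0.5 * math.sin(2 * math.pi * t / 4) for t in range(41)]
F, A = f_transform(y, 6)
trend = inverse_f_transform(F, A, len(y))
# away from the boundary, trend[t] closely follows the slow component 0.05 * t
```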
In many practical situations, we are interested in the value of a quantity y which is difficult or even impossible to measure or estimate directly. For example, we may be interested in the distance to a faraway star or in tomorrow's weather. Since we cannot estimate this quantity directly, a natural idea is to find easier-to-estimate quantities x1, ..., xn which are related to y by a known dependence y = f(x1, ..., xn), and use our estimates for xi to estimate y. Often, estimates xi come in fuzzy form. In this case, a natural idea is to use Zadeh's extension principle to find the fuzzy estimate for y.
In principle, this solves the problem, but from the computational viewpoint, a direct implementation of Zadeh's extension principle often requires too many computational steps. It is desirable to compute y faster. A known way to do this is by using interval computations on alpha-cuts. The first result that we show in this talk is that in many cases, we can further reduce the computation time.
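The alpha-cut approach mentioned above can be sketched as follows. The function f and the fuzzy inputs are illustrative assumptions; the endpoint shortcut used here is valid only when f is nondecreasing in every argument, otherwise f must be optimized over the alpha-cut boxes.

```python
# For f nondecreasing in each argument, the alpha-cut of y = f(x1, ..., xn)
# is obtained by applying f to the endpoints of the inputs' alpha-cuts.

def alpha_cut_triangular(a, b, c, alpha):
    """Alpha-cut [lo, hi] of the triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def extend(f, cuts, alphas):
    """Interval computation of f on each alpha-cut (f nondecreasing)."""
    result = {}
    for alpha in alphas:
        los = [cut(alpha)[0] for cut in cuts]
        his = [cut(alpha)[1] for cut in cuts]
        result[alpha] = (f(*los), f(*his))
    return result

f = lambda x1, x2: x1 + 2 * x2                    # illustrative dependence
x1 = lambda a: alpha_cut_triangular(0, 1, 2, a)
x2 = lambda a: alpha_cut_triangular(1, 2, 3, a)
y = extend(f, [x1, x2], [0.0, 0.5, 1.0])
print(y[1.0])  # (5.0, 5.0): the core of y
print(y[0.0])  # (2.0, 8.0): the support of y
```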
The need to decrease computation time is even more important for type-2 fuzzy sets, which require even more computations. Our second result is that for type-2, a significant speed up is also possible.
Finally, while traditionally Zadeh's extension principle is considered only with min as the t-norm, in principle other t-norms can be used as well. Our third result is that in this case, too, y can be computed faster.
Many of these results come from joint papers with Andrzej Pownuk.
With ever-increasing data volumes, database and information management systems face new challenges. Four important characteristics of `Big' data, commonly known as the four V's, are huge Volume, large Variety in data formats, high Velocity, and Veracity. Veracity refers to the trust one has in the data being used. In this talk we address novel techniques in computational intelligence, based on fuzzy set theory, for the proper handling of veracity problems. Topics presented include the adequate representation and handling of data imperfections, data quality assessment, and result visualization in (geographic) decision support systems. As a case study, aspects of the Belgian federal project Transnational and Integrated Long-term marine Exploitation Strategies (TILES) will be presented.
As electronic medical records become the norm for documenting the medical history of patients, reuse of the data generated during the care process opens new ways of supporting clinical decisions. Advanced data analysis techniques, machine learning, and data mining models that make secondary use of medical data are increasingly accepted in clinical applications. Despite the advent of data-driven models, practitioners find it important to have transparent models whose behavior can be well understood. In this respect, natural language is an effective means for communicating model behavior to the users. We argue that linguistic models based on fuzzy set theory form an excellent bridge between data-driven modeling and the transparency required by users in the clinical domain. We discuss three different modeling approaches that use fuzzy set theory to develop models for supporting clinical decisions and improving the care process.
Over the last twenty-five years, a variety of fuzzy decision-making applications have been introduced into industrial environments, across quite different industrial business processes. The talk gives an overview of the challenges that appeared in the different phases of introducing the technology to the market. Closing remarks will reflect both some lessons learned and some observations regarding the current situation, especially in the context of industrial digitalization.
to be completed
Fuzzy signatures and fuzzy signature sets are extensions of the original concepts of fuzzy membership degree and fuzzy set. They represent hierarchically structured multicomponent uncertain descriptors. The keynote talk discusses several aspects of this pair of definitions: placing them in the context of more general fuzzy mathematical constructions and introducing several examples of partly existing and partly potential applications.
While a fuzzy signature is an extension of the original concept of fuzzy membership degree proposed by Zadeh in 1965, it is less general than Goguen's L-fuzzy membership. The first step of generalizing fuzzy sets towards Vector Valued Fuzzy Sets was motivated by a simple problem in material science. (Microscopic images of steel alloys had to be classified.) Its further extension called Fuzzy Signature (FSig) had multiple motivations, coming from various fields of decision support, engineering and computer science. At present there are several fields where fuzzy signatures and fuzzy signature sets (FSS) seem to have successful applications, and further extensions of the idea have also been proposed.
The first part of the keynote deals with mathematical issues: it shows that, under reasonable restrictions (dealing with a family of fuzzy signature sets descending from a maximal, idealistic fuzzy signature model), FSSs represent a special case of L-fuzzy sets.
After establishing the algebraic structure of FSS, a collection of applications is presented. Fuzzy communication has been in the focus of research since the end of the 1980s. One possible way of modeling fuzzy communication is with FSS descriptors of each communicated situation. The communication and collaboration of a team of robots might be built on fuzzy signatures. The efficiency of this model can be further enhanced by adding a spatial structure to FSS, thus obtaining Fuzzy Situational Maps (FSM). Another possible application of 3D FSMs is the modeling and optimization of a logistic warehouse.
Another line of applications is the description of the condition of old residential buildings (based on experts' assessment reports). This has been done, from both the structural and the architectural point of view, on a stock of 19th-century buildings in the Hungarian capital, Budapest. A further tool based on FSS is the Fuzzy Signature State Machine, a suitable starting model for the optimization of renovation processes, also under various restrictions.
A brief review of ongoing and proposed research projects will close the talk.
We discuss several extensions of binary Boolean functions acting on the domain [0, 1]. Formally, there are 16 disjoint classes of such functions, covering the majority of binary functions considered in fuzzy set theory. We introduce and discuss dualities in this framework, stressing the links between different subclasses of the considered functions, e.g., the link between conjunctive and implication functions. Special classes of the considered functions are characterized, among others, by particular kinds of monotonicity. Relaxing these constraints by considering monotonicity in one direction only, we generalize standard classes of aggregation functions, implications, semicopulas, etc., into larger classes called pre-aggregations, pre-implications, pre-semicopulas, etc. Note that the dualities discussed for the standard classes also relate the new extended classes of pre-functions.
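The conjunction-implication link mentioned above can be illustrated with a standard construction: from a t-norm T and the standard negation N, one obtains an implication function I(x, y) = N(T(x, N(y))). The choice of the Lukasiewicz t-norm below is an illustrative assumption.

```python
# From a t-norm T, derive the (N, T)-dual implication I(x, y) = N(T(x, N(y)))
# with the standard negation N(x) = 1 - x, and check implication boundary
# behaviour on {0, 1}.

def t_lukasiewicz(x, y):
    """Lukasiewicz t-norm on [0, 1]."""
    return max(0.0, x + y - 1.0)

def N(x):
    """Standard (strong) negation."""
    return 1.0 - x

def implication_from_tnorm(T):
    """Dual implication of a t-norm: I(x, y) = N(T(x, N(y)))."""
    return lambda x, y: N(T(x, N(y)))

I = implication_from_tnorm(t_lukasiewicz)
# boundary behaviour expected of an implication function:
print(I(0.0, 0.0))  # 1.0
print(I(1.0, 1.0))  # 1.0
print(I(1.0, 0.0))  # 0.0
print(I(0.7, 0.4))  # Lukasiewicz implication: min(1, 1 - 0.7 + 0.4)
```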
Getting knowledge from massive data is nowadays a primary challenge for information processing. The goal of knowledge discovery from data describing decision situations is to help in making better decisions. One of the difficulties in knowledge discovery is the vague character of data due to inconsistency. The Dominance-based Rough Set Approach (DRSA) is a methodology for reasoning about vague data which handles monotonic relationships between condition attributes and decision attributes, typical for data describing decision situations. The origin of the vagueness is inconsistency due to violation of the dominance principle, which requires that (assuming a positive monotonic relationship) if object x has an evaluation at least as good as object y on all condition attributes, then it should not receive an evaluation worse than y on the decision attributes. We show that DRSA is a natural continuation of Pawlak's concept of rough set, which builds on ideas coming from Leibniz, Frege, Boole, Łukasiewicz, and Zadeh. We also show that the assumption admitted by DRSA about the ordinal character of evaluations on condition and decision attributes is not a limiting factor in knowledge discovery from data. In particular, it is an obvious assumption in decision problems, such as multicriteria classification or ranking, multiobjective optimization, and decision under risk and uncertainty. Moreover, even when the ordering of data seems irrelevant, the presence or absence of a property can be represented in ordinal terms, because if two properties are related, the presence, rather than the absence, of one property should make more (or less) probable the presence of the other property. This is even more apparent when the presence or absence of a property is graded or fuzzy, because in this case, the more credible the presence of a property, the more (or less) probable the presence of the other property.
This observation leads to a straightforward hybridization of DRSA with fuzzy sets. Since the presence of properties, possibly fuzzy, is the basis of information granulation, DRSA can also be seen as a general framework for granular computing. We also comment on a stochastic version of DRSA, on algebraic representations of DRSA, and on a topology for DRSA.
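A check for violations of the dominance principle described above can be sketched as follows. The data and names are hypothetical (higher value = better evaluation); this is a minimal illustration, not the DRSA algorithms themselves.

```python
# Dominance principle: if x is at least as good as y on every condition
# attribute, x should not be assigned a worse decision class than y.
# Pairs violating this are the source of the vagueness handled by DRSA.

def dominates(x, y):
    """True if x is at least as good as y on all condition attributes."""
    return all(xi >= yi for xi, yi in zip(x, y))

def inconsistent_pairs(objects):
    """Indices (i, j) where object i dominates j on conditions but has a worse class."""
    bad = []
    for i, (ci, di) in enumerate(objects):
        for j, (cj, dj) in enumerate(objects):
            if i != j and dominates(ci, cj) and di < dj:
                bad.append((i, j))
    return bad

# hypothetical (condition evaluations, decision class) data
data = [((3, 4), 2),
        ((2, 2), 1),
        ((3, 5), 1)]   # dominates object 0 on conditions but has a worse class
print(inconsistent_pairs(data))  # [(2, 0)]
```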
In the age of big data, business analytics is a core competence for managerial decision-making and competition. This talk will discuss a number of efforts in developing business analytics methods and applications with fuzzy logic and soft computing, in areas such as associative patterns, classification, recommendation, consumer behavior, competitive intelligence, and information extraction. Then, some directions for further exploration dealing with rich media data in future business will be highlighted.
The consensus reaching problem in group decision making is a discussion and deliberation process that includes a multi-stage negotiation. Normally, this problem has been modelled as an iterative process with a number of negotiation rounds. The number of rounds or iterations depends on the consensus level achieved at the end of each iteration. When the consensus level reaches the minimum required, the consensus reaching process terminates. However, in many real-world decision situations not only the environment of the process but also the specific parameters of the model can change during the negotiation period. Consequently, there still exists the necessity of developing dynamic consensus reaching models to deal with any change of the problem environment that could affect the decision outcome. Indeed, in the last few years, classical static consensus models have given way to new dynamic approaches that are able to manage parameter variability or to adapt themselves to environment changes. The purpose of this talk is to shed some light on the recent evolution of consensus reaching models under dynamic environments and to critically analyze their advantages and limitations.
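The classical static loop described above (rounds of feedback until a minimum consensus level is reached) can be sketched as follows. The consensus measure, feedback rule, threshold, and data are illustrative assumptions, not any specific published model.

```python
# A minimal static consensus-reaching loop: experts hold preference vectors,
# consensus is 1 minus the mean absolute deviation from the collective
# (mean) preference, and each round moves every opinion toward the mean.

def consensus_level(prefs):
    """Consensus in [0, 1]: 1 - mean absolute deviation from the group mean."""
    n, m = len(prefs), len(prefs[0])
    mean = [sum(p[k] for p in prefs) / n for k in range(m)]
    dev = sum(abs(p[k] - mean[k]) for p in prefs for k in range(m)) / (n * m)
    return 1.0 - dev

def reach_consensus(prefs, threshold=0.95, step=0.3, max_rounds=50):
    """Iterate feedback rounds until the consensus level meets the threshold."""
    rounds = 0
    while consensus_level(prefs) < threshold and rounds < max_rounds:
        n, m = len(prefs), len(prefs[0])
        mean = [sum(p[k] for p in prefs) / n for k in range(m)]
        prefs = [[p[k] + step * (mean[k] - p[k]) for k in range(m)] for p in prefs]
        rounds += 1
    return prefs, rounds, consensus_level(prefs)

experts = [[0.9, 0.1, 0.5], [0.2, 0.8, 0.4], [0.6, 0.5, 0.6]]
final, rounds, level = reach_consensus(experts)
print(rounds, round(level, 3))  # terminates once the 0.95 level is met
```

The dynamic models discussed in the talk generalize exactly this loop: the threshold, the set of experts, or the alternatives may change between rounds, which a static sketch like this cannot accommodate.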