Development of training algorithms for artificial neural networks and fuzzy logic systems
Three training algorithms for radial basis function (RBF) neural networks have been developed. The fuzzy means algorithm uses a fuzzy partition of the input space and combines self-organized and supervised learning. For a given fuzzy partition of the input space, the proposed method is able to determine the proper network structure without resorting to a trial-and-error procedure. The second method is based on the subtractive clustering technique. Both methods are characterized by low computational complexity. The third training method is based on a specially designed Genetic Algorithm (GA), which is used to auto-configure the structure of the networks and obtain the model parameters. This technique formulates a complete optimization problem, which includes the network structure among the free variables used to minimize the prediction error.

An additional algorithm has been developed for training fuzzy systems from numerical data. The main advantage of the method is that it avoids complicated iterative mechanisms and is therefore easy to implement. The suggested algorithm employs a fuzzy model with simplified rules, assuming a fuzzy partition of the input space into fuzzy subspaces. The output is inferred by expanding the model into fuzzy basis functions (FBFs), where each FBF corresponds to a certain fuzzy subspace. The number of rules and the respective premise parts (fuzzy subspaces) are determined using the nearest-neighbor approach.
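The general idea of fixing an RBF network's hidden layer from a partition of the input space and then solving only for the output weights can be sketched as follows. This is a minimal illustration under stated assumptions, not the fuzzy means algorithm itself: the centers here come from a simple rectangular grid over the input domain, the widths are a single hand-picked constant, and the center-selection rule of the actual method is not reproduced.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian activation of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, y, centers, width):
    """Hidden layer fixed by the partition; output weights by least squares."""
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, width, w):
    return rbf_design_matrix(X, centers, width) @ w

# Partition each input dimension into five subspaces; the grid of subspace
# centers serves as the set of candidate RBF centers (an assumption made
# for illustration only).
grid = np.linspace(0.0, 1.0, 5)
centers = np.array([[a, b] for a in grid for b in grid])

rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]

w = train_rbf(X, y, centers, width=0.3)
err = np.sqrt(np.mean((predict(X, centers, 0.3, w) - y) ** 2))
print("training RMSE:", round(err, 3))
```

Because the structure is fixed before training, the only fit is a linear least-squares problem, which is what keeps this family of methods computationally cheap compared with gradient-based training of all parameters.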
Development of evolutionary algorithms
A complete framework has been presented for solving nonlinear constrained optimization problems, based on the line-up differential evolution (LUDE) algorithm, which solves unconstrained problems. Linear and/or nonlinear constraints are handled by embedding them in an augmented Lagrangian function, whose penalty parameters and multipliers are adapted as the execution of the algorithm proceeds. The LUDE algorithm maintains a population of solutions, which is continuously improved from generation to generation. In each generation the solutions are lined up according to their objective function values. The position of each solution in the line-up is very important, since it determines the extent to which the crossover and mutation operators are applied to it.

A stochastic algorithm has been developed for solving hierarchical multiobjective optimization problems. The algorithm is based on the simulated annealing concept and returns a single solution that corresponds to the lexicographic ordering approach. It optimizes the multiple objectives simultaneously by assigning a different initial temperature to each one, according to its position in the hierarchy. A major advantage of the proposed method is its low computational cost, which is especially critical for on-line applications, where the time available for decision making is limited.

Finally, a new heuristic method has been developed for solving instances of the travelling salesman problem. The proposed algorithm uses a variant of the Threshold Accepting method, enhanced with intense local search, while candidate solutions are produced through an insertion heuristic scheme.
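The Threshold Accepting idea behind the travelling salesman heuristic can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the initial tour is built by cheapest insertion, candidate tours are generated by random 2-opt reversals (standing in for the local search), and a candidate is accepted whenever it worsens the tour by less than a threshold that shrinks geometrically over the run.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def cheapest_insertion(pts):
    """Grow a tour by always inserting the city that is cheapest to add."""
    remaining = list(range(2, len(pts)))
    tour = [0, 1]
    while remaining:
        best = None
        for c in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[c], pts[b])
                         - math.dist(pts[a], pts[b]))
                if best is None or delta < best[0]:
                    best = (delta, c, i + 1)
        _, c, pos = best
        tour.insert(pos, c)
        remaining.remove(c)
    return tour

def threshold_accepting(pts, steps=15000, t0=1.0, decay=0.999, seed=1):
    rng = random.Random(seed)
    tour = cheapest_insertion(pts)
    threshold = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        # Accept any candidate that is better, or worse by less than the
        # current threshold (the defining rule of Threshold Accepting).
        if tour_length(cand, pts) - tour_length(tour, pts) < threshold:
            tour = cand
        threshold *= decay
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
tour = threshold_accepting(pts)
print("tour length:", round(tour_length(tour, pts), 3))
```

Unlike simulated annealing, acceptance here is deterministic given the threshold: there is no probabilistic accept/reject draw, which is part of what keeps the method cheap per iteration.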