Method for the selection of inputs and structure of feedforward neural networks

H. Saxén*, F. Pettersson

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

41 Citations (Scopus)

Abstract

Feedforward neural networks of the multi-layer perceptron type can be used as nonlinear black-box models in data-mining tasks. Two common problems are the selection of relevant inputs from a large set of variables that potentially affect the outputs to be modeled, and high noise levels in the data sets. To avoid over-fitting of the resulting model, the input dimension and/or the number of hidden nodes must be restricted. This paper presents a systematic method that guides the selection of both the input variables and a sparse connectivity of the lower layer of connections in feedforward neural networks with one layer of hidden nonlinear units and a single linear output node. The algorithm is illustrated on three benchmark problems.
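The network class considered in the abstract (one hidden layer of nonlinear units, a single linear output node, and a sparse lower layer of connections) can be illustrated with a generic magnitude-based input-pruning sketch. This is NOT the authors' algorithm — the paper's actual selection criterion is described in the article itself — only a minimal illustration of the model structure and of the kind of input selection the paper addresses; the toy data, network size, and pruning threshold are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 candidate inputs, but the target depends only on x0 and x2.
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2

def train(X, y, mask, n_hidden=4, epochs=2000, lr=0.05):
    """Gradient-descent training of a one-hidden-layer tanh MLP with a
    single linear output node; `mask` keeps pruned lower-layer weights at zero."""
    n_in, N = X.shape[1], len(y)
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)) * mask
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden nonlinear units
        err = H @ w2 + b2 - y               # linear output minus target
        dH = err[:, None] * w2[None, :] * (1 - H ** 2)
        W1 -= lr * (X.T @ dH / N) * mask    # masked update: pruned weights stay 0
        b1 -= lr * dH.mean(axis=0)
        w2 -= lr * (H.T @ err / N)
        b2 -= lr * err.mean()
    H = np.tanh(X @ W1 + b1)
    return W1, float(np.mean((H @ w2 + b2 - y) ** 2))

# Generic pruning loop (illustrative only): repeatedly drop the input with the
# smallest total lower-layer weight magnitude, as long as the fit holds up.
active = np.ones(6, dtype=bool)
W1, mse = train(X, y, np.ones((6, 4)))
while active.sum() > 1:
    saliency = np.abs(W1).sum(axis=1)
    saliency[~active] = np.inf
    trial = active.copy()
    trial[int(np.argmin(saliency))] = False
    W1_t, mse_t = train(X, y, np.outer(trial, np.ones(4)))
    if mse_t <= 1.2 * mse + 1e-3:           # accept prune if error barely grows
        active, W1, mse = trial, W1_t, mse_t
    else:
        break

print("retained inputs:", np.flatnonzero(active))
```

The sparsity is enforced simply by multiplying both the initial weights and every gradient update by the mask, so pruned connections remain exactly zero throughout training.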

Original language: English
Pages (from-to): 1038-1045
Number of pages: 8
Journal: Computers and Chemical Engineering
Volume: 30
Issue number: 6-7
DOIs
Publication status: Published - 15 May 2006
MoE publication type: A1 Journal article-refereed

Keywords

  • Detection of relevant inputs
  • Neural networks
  • Pruning algorithm
