### Abstract

Single-layer, continuous-time cellular neural/nonlinear networks (CNNs) with linear templates are considered. The networks are programmed by their template parameters. A fundamental question in template training or adaptation is how to compute or approximate the gradient of the error with respect to the template parameters. Exact equations are developed for computing these gradients. The equations are similar to the CNN network equations, i.e. they have the same neighborhood and connectivity as the original CNN. It is shown that a CNN with a modified output function can compute the gradients. Thus fast, on-line gradient computation is possible via the CNN Universal Machine, which allows on-line adaptation and training. A method for computing the gradient on-chip is investigated and demonstrated.
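The abstract does not reproduce the equations, but the idea can be illustrated with the standard Chua-Yang CNN state equation and its sensitivity (variational) equation: differentiating the state dynamics with respect to one feedback-template entry yields an ODE with the same neighborhood structure, in which the derivative f'(x) of the output nonlinearity plays the role of the modified output function. The sketch below is our own illustration under these standard assumptions, not the paper's implementation; all function names, the Euler integrator, and the parameter values are chosen for the example.

```python
import numpy as np

def f(x):
    # Standard piecewise-linear CNN output: y = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def fprime(x):
    # Derivative of the output: 1 in the linear region |x| < 1, else 0.
    # This is the "modified output function" of the gradient network.
    return (np.abs(x) < 1).astype(float)

def shift(img, m, n):
    # out[i, j] = img[i + m, j + n], with zero (fixed) boundary cells
    out = np.zeros_like(img)
    H, W = img.shape
    i0, i1 = max(0, -m), min(H, H - m)
    j0, j1 = max(0, -n), min(W, W - n)
    out[i0:i1, j0:j1] = img[i0 + m:i1 + m, j0 + n:j1 + n]
    return out

def conv(img, T):
    # Apply a 3x3 template over the usual 1-neighborhood
    out = np.zeros_like(img)
    for m in (-1, 0, 1):
        for n in (-1, 0, 1):
            out += T[m + 1, n + 1] * shift(img, m, n)
    return out

def cnn_and_sensitivity(u, A, B, z, k, l, T=5.0, dt=0.01):
    """Euler-integrate the CNN state equation
        dx/dt = -x + A*y + B*u + z,  y = f(x),
    and, alongside it, the sensitivity s = dx/da_{kl} for one
    feedback-template entry a_{kl}.  The sensitivity ODE uses the
    same templates and neighborhood as the CNN itself, with f'(x)
    as a modified output function and the neighbor output y_{i+k,j+l}
    as a forcing term."""
    x = np.zeros_like(u)
    s = np.zeros_like(u)
    for _ in range(int(T / dt)):
        y = f(x)
        xdot = -x + conv(y, A) + conv(u, B) + z
        sdot = -s + conv(fprime(x) * s, A) + shift(y, k, l)
        x = x + dt * xdot
        s = s + dt * sdot
    return x, s

# Demo: sensitivity of the state w.r.t. the east feedback entry a_{0,1}
rng = np.random.default_rng(0)
u = rng.uniform(-0.1, 0.1, size=(8, 8))  # small input keeps cells in the linear region
A = np.array([[0.0, 0.1, 0.0], [0.1, 0.4, 0.1], [0.0, 0.1, 0.0]])
B = np.zeros((3, 3)); B[1, 1] = 1.0
x, s = cnn_and_sensitivity(u, A, B, 0.0, 0, 1)
```

Because the sensitivity dynamics reuse the CNN's own templates and connectivity, the same cellular hardware can evaluate them; the returned `s` agrees with a central finite-difference perturbation of the template entry, which is the natural sanity check for such a gradient network.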

| Original language | English |
|---|---|
| Pages (from-to) | 111-120 |
| Number of pages | 10 |
| Journal | Neural Processing Letters |
| Volume | 16 |
| Issue number | 2 |
| DOIs | https://doi.org/10.1023/A:1019933009505 |
| Publication status | Published - Oct 2002 |

### Keywords

- Cellular neural network
- Gradient-method
- Learning
- Training

### ASJC Scopus subject areas

- Artificial Intelligence
- Neuroscience (all)

## Fingerprint

Research topics of 'Gradient computation of continuous-time cellular neural/nonlinear networks with linear templates via the CNN universal machine'.

## Cite this

Gradient computation of continuous-time cellular neural/nonlinear networks with linear templates via the CNN universal machine. (2002). *Neural Processing Letters*, *16*(2), 111-120. https://doi.org/10.1023/A:1019933009505