Definition

How Neural Networks (NNs) Work

A neural network (NN) is made up of interconnected processing elements (PEs): self-adjusting units that are densely connected to other PEs in the system. A PE receives inputs from external data or from other PEs, and each input has a weight assigned to it. The weights determine how strongly each input influences the PE's computation, and the PE automatically adjusts those weights based on its past experience. Each PE combines its weighted inputs and generates a single output signal, which it passes on to other PEs in the network. The overall pattern of outputs produced by the PEs forms the basis for information analysis and retrieval.

Thus, unlike conventional computers, which are programmed to perform a task, neural computers are trained to perform one. By controlling and fine-tuning how information flows among PEs, the software gradually changes the strength of the connections between them, and in this way new information is "learned." For example, one neural network learned overnight to pronounce 20,000 words; although it sounded like a child at first, it gradually improved and eventually spoke in full sentences.
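The behavior described above, a unit that weights its inputs, emits a single output, and adjusts its weights from experience, can be sketched with the classic perceptron learning rule. This is a minimal illustration, not a specific product's implementation; the class name `ProcessingElement` and the step activation are assumptions chosen for clarity.

```python
# Minimal sketch of one processing element (PE): weighted inputs,
# a single output signal, and self-adjusting weights.
# Uses the classic perceptron update rule; names are illustrative.

class ProcessingElement:
    def __init__(self, n_inputs, learning_rate=0.1):
        self.weights = [0.0] * n_inputs   # one weight per input connection
        self.bias = 0.0
        self.lr = learning_rate

    def output(self, inputs):
        # Weighted sum of inputs passed through a step activation:
        # the PE emits a single output signal (0 or 1).
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if total > 0 else 0

    def train(self, inputs, target):
        # "Past experience": nudge each weight in proportion to the error,
        # so future outputs move closer to the target.
        error = target - self.output(inputs)
        self.weights = [w + self.lr * error * x
                        for w, x in zip(self.weights, inputs)]
        self.bias += self.lr * error

# Train one PE to reproduce logical AND from examples,
# rather than programming the rule explicitly.
pe = ProcessingElement(n_inputs=2)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):
    for inputs, target in data:
        pe.train(inputs, target)
```

After training, the PE outputs 1 only for the input (1, 1): the AND behavior was never coded directly, only learned by repeatedly adjusting the weights, which mirrors the "trained, not programmed" distinction in the definition.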

