Neural networks are computer models that attempt to imitate the parallel processing that occurs in the human brain.
They are typically viewed as collections of nodes, each with one or more inputs and a single output. The nodes are arranged in layers, with the outputs of one layer serving as the inputs to the next. The first layer's inputs are the inputs to the network as a whole, and the last layer, often a single node, produces the network's output. Each node applies a mathematical function that transforms its inputs into its output. Rather than having each node's function hand-programmed, neural networks are typically trained using a machine learning technique. This can produce networks that perform their task adequately but offer little insight into how they reach their decisions.
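The layered structure described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the node function is assumed to be a weighted sum passed through a sigmoid, and the weights are hand-chosen for illustration rather than learned.

```python
import math

def node(inputs, weights, bias):
    # Each node applies a fixed function to its inputs; here we
    # assume a weighted sum squashed through a sigmoid.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(network, inputs):
    # A network is a list of layers; each layer is a list of
    # (weights, bias) pairs, one per node.  The outputs of one
    # layer become the inputs to the next.
    for layer in network:
        inputs = [node(inputs, w, b) for (w, b) in layer]
    return inputs[0]  # the last layer has a single node

# Hypothetical weights (in practice these would be learned).
net = [
    [([0.5, -0.6], 0.1), ([0.8, 0.2], -0.3)],  # hidden layer: 2 nodes
    [([1.5, -1.2], 0.0)],                      # output layer: 1 node
]
output = forward(net, [1.0, 0.0])
```

Because every node's output passes through the sigmoid, the final output always lies between 0 and 1, which is convenient for the categorization problems discussed below.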
Neural networks have not been as successful as was at one time hoped. They are useful for automatic categorization problems but have not proven useful for modelling general cognition. In categorization problems, the inputs are features of the item being categorized.
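To make the categorization setting concrete, here is a hedged sketch of how an item's features might feed a trained network. The feature names, weights, and the two categories are all invented for illustration; only the pattern of "features in, category out" comes from the text.

```python
import math

def categorize(features, weights, bias):
    # Features of the item (e.g. measurements) form the input vector.
    # A hypothetical trained single-node network scores the item,
    # and the score is thresholded to pick a category.
    score = sum(w * x for w, x in zip(features, weights)) + bias
    prob = 1.0 / (1.0 + math.exp(-score))
    return "category_A" if prob > 0.5 else "category_B"

# Invented example: two numeric features and hand-picked weights.
label = categorize([0.9, 0.2], weights=[2.0, -1.5], bias=-0.5)
```

In a real system the weights would be learned from labeled examples, and the categories would correspond to the classes of interest.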