Gaussian adaptation

Gaussian adaptation is designed to maximize the manufacturing yield of signal-processing systems, which is limited by the statistical deviation of component values. The process uses the theorem of Gaussian adaptation, which states that:

if the centre of gravity of a high-dimensional Gaussian distribution coincides with the centre of gravity of the part of the distribution belonging to some region of acceptability in a state of selective equilibrium, then the hitting probability on the region is maximal.

The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be applied by cyclic repetition of random variation and selection, much as in natural evolution. In every cycle a sufficiently large number of Gaussian-distributed points is sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian is then moved to the centre of gravity of the approved points. The process thus converges to a state of equilibrium that fulfils the theorem. A solution is always approximate, because the centre of gravity is determined from a limited number of points.
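The cycle described above can be sketched in a few lines of code. This is only an illustration: the two-dimensional region of acceptability (a disc) and all numerical settings are assumptions, not part of the original method.

```python
import numpy as np

rng = np.random.default_rng(0)

def acceptable(x):
    # Hypothetical region of acceptability: a disc of radius 2 centred at (1, 1).
    return np.sum((x - 1.0) ** 2) <= 4.0

def gaussian_adaptation(m, C, n_samples=500, n_cycles=50):
    """One run of the cyclic process: sample Gaussian points, keep the
    accepted ones, and move the centre of gravity m to their mean."""
    for _ in range(n_cycles):
        points = rng.multivariate_normal(m, C, size=n_samples)
        approved = points[np.array([acceptable(p) for p in points])]
        if len(approved) == 0:
            continue  # no accepted points this cycle; keep the old centre
        m = approved.mean(axis=0)  # centre of gravity of the approved points
    return m

m = gaussian_adaptation(np.zeros(2), np.eye(2))
```

At equilibrium, m sits at the centre of gravity of the accepted part of the distribution, which is exactly the condition named in the theorem.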


Gaussian adaptation was first used in 1969 as a pure optimization algorithm, shrinking the region of acceptability step by step. Since 1970 it has been used for both ordinary optimization and maximization of manufacturing yield.

In 1972 Gaussian adaptation was tested on a complex signal-processing system. The model of the system included 450 components (each with a parameter value), of which 130 were adjustable. Some early tests seemed promising. But then a joker said: “In this project nothing should be left to chance.” This was, of course, a good intention as long as all the stops were pulled out. But there was also a hidden meaning: the algorithm worked at random, and therefore it should not be used in this project.

After one year of hard work with the deterministic methods available at that time, a passable system was found. But because component parameter values vary in the manufacturing process, only 5% of the manufactured systems were able to meet all requirements of the specification.

Those responsible for the project got nervous and clutched at a straw: the random algorithm. After two months with the random algorithm, 95% of the systems were acceptable, and the system could be manufactured and sold at a good profit. The example showed that random algorithms may produce an enormous amount of information that is hardly obtainable by other means.

This example was never published, but Kjellström & Taxén, 1981, showed an example with 76 parameters, which is also a large number in this context.

Gaussian adaptation as a model of evolution

Gaussian adaptation has also been compared to the natural evolution of populations of living organisms. In this case the region of acceptability is preferably replaced by a probability function, s(x), where x is an array of quantitative characters (phenotypes) determining the organism. This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability, independent of its structure and extension.

Mean fitness may be defined as

  P(m) = ∫ s(x) N(m − x) dx.

Here N is the Gaussian probability density function (p.d.f.) of the phenotypes and m is its centre of gravity. Because a Gaussian is the exponential of a negative quadratic form in the phenotypes, it is fairly easily proved that the process maximizes the mean fitness of a large population. The condition for maximal mean fitness is obtained by setting the derivative dP(m)/dm equal to zero. Because the derivative of an exponential is proportional to the exponential itself, the result becomes proportional to P(m)(m* − m) = 0, where m* is the centre of gravity of the phenotypes of the parents of the offspring in the progeny. This leads to the theorem of Gaussian adaptation in its simplest form.
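The differentiation step can be written out explicitly. Writing the Gaussian p.d.f. with moment matrix M (a notation assumed here; the text does not name the matrix), we have:

```latex
N(m - x) \propto \exp\!\left(-\tfrac{1}{2}(x - m)^{T} M^{-1}(x - m)\right),
\qquad
\frac{\partial P}{\partial m}
  = \int s(x)\, M^{-1}(x - m)\, N(m - x)\, dx
  = M^{-1} P(m)\,\bigl(m^{*} - m\bigr),
```

where m* = (1/P(m)) ∫ x s(x) N(m − x) dx is the selection-weighted centre of gravity. Setting the derivative to zero gives m* = m: the centre of gravity of the distribution coincides with the centre of gravity of its accepted part, as the theorem states.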

More generally, if m deviates slightly from its optimal position in any direction, then mean fitness decreases, but it may be recovered if the determinant of the moment matrix is slightly decreased. According to Shannon's information theory this means that the average information (entropy, disorder, diversity) is simultaneously maximal, which also means that average information is as important to survival as mean fitness.
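The link between the determinant and average information is a standard result of information theory: the differential entropy of an n-dimensional Gaussian with moment matrix M is

```latex
H = \tfrac{1}{2} \ln\!\left((2\pi e)^{n} \det M\right),
```

so for fixed dimension the entropy grows monotonically with det M. Slightly decreasing the determinant to recover mean fitness therefore trades average information against fitness, which is the balance described above.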

As long as the ontogenetic program may be seen as a stepwise modified recapitulation of the evolution of a particular individual organism, the central limit theorem states that the sum of contributions from many random steps tends to become Gaussian distributed. A necessary condition for natural evolution to fulfil the theorem of Gaussian adaptation is that it can push the centre of gravity of the Gaussian towards the centre of gravity of the surviving individuals. The Hardy-Weinberg law may accomplish this. In that case the rules of genetic variation, such as crossover, inversion and transposition, may be seen as random number generators for the phenotypes. In this sense Gaussian adaptation may be seen as a genetic algorithm.

A remarkable thing is that the Gaussian distribution is the most disordered of all distributions having the same moment matrix, and it is connected to several remarkable theorems: the central limit theorem, the Hardy-Weinberg law, the entropy law and the theorem of Gaussian adaptation. Those theorems make it possible to centre a gene pool in an arbitrary region of acceptability so as to maximize mean fitness, which is an extremely difficult problem in a high-dimensional space.

So we may perhaps ask whether this combination of chaos and mathematical order makes evolution a tool for intelligent design, with the intelligence represented by the mathematical theorems.

How to climb a mountain

Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape are known. The real landscape is not known, but the figure below shows a fictitious profile (blue) of a landscape along a line (x) in a space spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of the figure. It is obtained by letting the bell curve slide along the x-axis, calculating the mean at every location. As can be seen, small peaks and pits are smoothed out. Thus, if evolution is started at A with a relatively small variance (the red bell curve), then climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of those points remain and the mutation rate is too small.
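The sliding bell curve is a convolution of the landscape with a Gaussian kernel. The landscape profile below is invented for illustration; only the smoothing mechanism corresponds to the figure.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)
# Fictitious landscape: a slow trend with small peaks and pits on top.
landscape = 0.2 * x + np.sin(x) + 0.3 * np.sin(5.0 * x)

def mean_fitness_curve(profile, x, sigma):
    """Slide a Gaussian bell curve of width sigma along x and take the
    local average of the profile at every location (a convolution).
    Note: values near the ends of x are distorted by zero-padding."""
    dx = x[1] - x[0]
    t = np.arange(-4.0 * sigma, 4.0 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

red = mean_fitness_curve(landscape, x, sigma=0.2)    # small variance
green = mean_fitness_curve(landscape, x, sigma=1.0)  # larger variance
```

The wider the bell curve, the more the small peaks and pits are smoothed out, which is why climbing on the green curve can pass hollows that trap the red one.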


If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. The climbing will then take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides, depending on the landscape, the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may look like the punctuated equilibrium suggested by Gould (see Ridley).


References

  • Bergström, R. M., 1969. An Entropy Model of the Developing Brain. Developmental Psychobiology, 2(3): 139-152.
  • Bergström, M. Hjärnans resurser. Brain Books, ISBN 91-88410-07-2, Jönköping, 1992. (Swedish).
  • Bergström, M. Neuropedagogik. En skola för hela hjärnan. Wahlström & Widstrand, 1995. (Swedish).
  • Cramér, H. Mathematical Methods of Statistics. Princeton, Princeton University Press, 1961.
  • Dawkins, R. The Selfish Gene. Oxford University Press, 1976.
  • Eigen, M. Steps towards life. Oxford University Press, 1992.
  • Gaines, Brian R. Knowledge Management in Societies of Intelligent Adaptive Agents. Journal of intelligent Information systems 9, 277-298 (1997).
  • Goldberg, D. E. Genetic Algorithms in Search Optimization & Machine Learning. Addison-Wesley, New York, 1989.
  • Hamilton, WD. 1963. The evolution of altruistic behavior. American Naturalist 97:354-356
  • Hartl, D. L. A Primer of Population Genetics. Sinauer, Sunderland, Massachusetts, 1981.
  • Kandel, E. R., Schwartz, J. H., Jessel, T. M. Essentials of Neural Science and Behavior. Prentice Hall International, London, 1995.
  • Kjellström, G. Network Optimization by Random Variation of component values. Ericsson Technics, vol. 25, no. 3, pp. 133–151, 1969.
  • Kjellström, G. Optimization of electrical Networks with respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157–175, 1970.
  • Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981.
  • Kjellström, G. On the Efficiency of Gaussian Adaptation. Journal of Optimization Theory and Applications, vol. 71, no. 3, Dec. 1991.
  • Kjellström, G. & Taxén, L. Gaussian Adaptation, an evolution-based efficient global optimizer. In: C. Brezinski & U. Kulish (Eds.), Computational and Applied Mathematics, Elsevier Science Publishers B.V., pp. 267–276, 1992.
  • Kjellström, G. Evolution as a statistical optimization algorithm. Evolutionary Theory 11:105-117 (January, 1996).
  • Kjellström, G. The evolution in the brain. Applied Mathematics and Computation, 98(2-3):293-300, February, 1999.
  • Kjellström, G. Evolution in a nutshell and some consequences concerning valuations. EVOLVE, ISBN 91-972936-1-X, Stockholm, 2002.
  • Levine, D. S. Introduction to Neural & Cognitive Modeling. Laurence Erlbaum Associates, Inc., Publishers, 1991.
  • MacLean, P. D. A Triune Concept of the Brain and Behavior. Toronto, Univ. Toronto Press, 1973.
  • Maynard Smith, J. 1964. Group Selection and Kin Selection, Nature 201:1145-1147.
  • Maynard Smith, J. 1998. Evolutionary Genetics. Oxford University Press.
  • Mayr, E. What Evolution is. Basic Books, New York, 2001.
  • Middleton, D. An Introduction to Statistical Communication Theory. McGraw-Hill, 1960.
  • Rechenberg, I. Evolutionsstrategie. Stuttgart: Fromann - Holzboog, 1973.
  • Reif, F. Fundamentals of Statistical and Thermal Physics. McGraw-Hill, 1985.
  • Ridley, M. Evolution. Blackwell Science, 1996.
  • Shannon, C. E. A Mathematical Theory of Communication, Bell Syst. Techn. J., Vol. 27, pp 379–423, (Part I), 1948.
  • Stehr, G. On the Performance Space Exploration of Analog Integrated Circuits. Technische Universität München, Dissertation, 2005.
  • Taxén, L. A Framework for the Coordination of Complex Systems’ Development. Institute of Technology, Linköping University, 2003.
  • Zohar, D. The Quantum Self: A Revolutionary View of Human Nature and Consciousness Rooted in the New Physics. London, Bloomsbury, 1990.
  • Åslund, N. The fundamental theorems of information theory (Swedish). Nordisk Matematisk Tidskrift, Band 9, Oslo 1961.