A search engine retrieves information on the Internet for users. Typically a user types in keywords and the search engine displays a list of links to pages containing those terms. The search engine may rank links using algorithms that prioritize relevance, popularity, or fees paid to the search engine company by the listed sites. The ranking methods used by search engines are generally proprietary and not completely known.
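The core of keyword lookup is an inverted index: a mapping from each term to the set of pages that contain it, built ahead of time so queries only require fast set intersections. Below is a minimal sketch in Python; the page contents and URLs are invented for illustration, and real engines add tokenization, stemming, and relevance scoring on top of this.

```python
from collections import defaultdict

# Hypothetical corpus of pages: URL -> page text (illustrative data only).
pages = {
    "example.com/a": "open source search engine",
    "example.com/b": "search the web with keywords",
    "example.com/c": "web crawler updates the index",
}

# Build the inverted index: term -> set of URLs containing that term.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)

def search(query):
    """Return URLs containing every query term (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    # Start from the first term's postings, then intersect the rest.
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

print(sorted(search("search web")))  # only example.com/b contains both terms
```

Because the index is precomputed, answering a query never requires rescanning the documents themselves, which is what makes sub-second responses possible at Web scale.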
AltaVista, launched in 1995, was a popular early search engine, but its slow speed caused it to lose ground to Google, which retrieved results almost instantaneously from precomputed (presorted) indexes. In the December 2008 Nielsen Online rankings, Google had over 62% of search traffic on the Internet, with Yahoo! a distant second. Google, Yahoo! and AOL all showed good year-on-year growth, while Microsoft's MSN/Live Search was the biggest loser. Newer search engines have been developed as well, such as Microsoft's Bing (the replacement for MSN/Live Search) and DuckDuckGo. Each aims to compete with the more commonly used information retrieval sites, typically by offering a unique capability that the others do not.
It is estimated that at least a million lines of code are needed for a working search engine; Elasticsearch and Solr are two open-source examples. A search engine for the public Internet also requires an enormous amount of information about crawled Web sites, along with "spiders" that continuously crawl the Web to keep that information up to date. This means that significant computing and data-storage resources are needed.
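A spider's basic job can be sketched as a breadth-first traversal: fetch a page, record its content for indexing, and queue any links not yet seen. The sketch below substitutes a small in-memory dictionary for the network, since a real spider would fetch pages over HTTP and add politeness delays, robots.txt handling, and re-crawl scheduling; all URLs here are hypothetical.

```python
from collections import deque

# A tiny in-memory "web": URL -> (page text, outgoing links).
# Hypothetical data standing in for pages a real spider fetches over HTTP.
web = {
    "site/home": ("welcome page", ["site/docs", "site/blog"]),
    "site/docs": ("documentation", ["site/home", "site/api"]),
    "site/blog": ("news and posts", ["site/home"]),
    "site/api":  ("api reference", []),
}

def crawl(seed, fetch):
    """Breadth-first crawl from a seed URL, visiting each page once."""
    seen = {seed}
    queue = deque([seed])
    crawled = {}
    while queue:
        url = queue.popleft()
        text, links = fetch(url)
        crawled[url] = text          # store page content for later indexing
        for link in links:
            if link not in seen:     # skip pages already queued or visited
                seen.add(link)
                queue.append(link)
    return crawled

crawled = crawl("site/home", lambda url: web[url])
print(sorted(crawled))  # all four pages reachable from the seed
```

The `seen` set is what keeps storage and compute demands bounded on a graph full of cycles; at real Web scale that set alone, plus the stored page data, is why crawling requires the enormous resources described above.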