Artificial intelligence

From Conservapedia
See also: Liberal AI falsehoods
ASIMO is a humanoid robot created by Honda. Standing at 130 centimeters and weighing 54 kilograms, the robot resembles a small astronaut wearing a backpack and can walk on two feet in a manner resembling human locomotion at up to 6 km/h. ASIMO was created at Honda's Research & Development Wako Fundamental Technical Research Center in Japan.

Artificial intelligence (or AI) is a computer-based simulation of the human thought process. It is a common feature in science fiction; as yet no true AI (i.e., one capable of passing the Turing Test) has been created, but the field is advancing rapidly.[1] In 2025, AI identified the Dead Sea Scrolls as being older than previously estimated.

While AI has benefits, AI is being used for liberal indoctrination, and to promote harmful vices like gambling. See gambling and AI. In addition, college has become "just bots talking to bots" as professors and students alike increasingly rely on AI to do their work.[2]

The leading AI competitors are ChatGPT (by OpenAI), Gemini (by Google), and Grok (by Elon Musk and xAI). Additional AI products are offered by Meta, Anthropic, Microsoft, and DeepSeek.

Capital expenditure on AI is expected to total $5-7 trillion by 2030. AI startup valuations stood at $2.30 trillion as of 2025, up from $1.69 trillion in 2024 and $469 billion in 2020. But AI's capacity to generate cash and returns on that investment remains questionable: revenues would have to grow more than 20-fold from the current $15-20 billion per annum just to cover current annual investment in land, buildings, rapidly depreciating chips, and electricity and water operating expenses.
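
A back-of-envelope check of the figures above, taking the article's $15-20 billion annual revenue and "over 20 times" multiple as given, implies roughly $300-400 billion of annual investment to be covered:

```python
# Implied annual AI investment if current revenue of $15-20 billion
# would have to grow more than 20-fold just to cover it.
revenue_low, revenue_high = 15e9, 20e9   # dollars per annum
multiple = 20
implied_low = revenue_low * multiple     # $300 billion
implied_high = revenue_high * multiple   # $400 billion
print(implied_low, implied_high)
```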

Each of the 50 states can regulate AI with its own laws, and the U.S. Senate voted in early July 2025 to remove a 10-year ban on state regulation from the House of Representatives' version of the One Big Beautiful Bill Act.

By mid-September 2025, four hundred tech companies in the United States, led by IBM, Google, Intel and Oracle, had laid off 166,000 workers as AI replaced the need for human labor.[3]

Development

The field of artificial intelligence is now divided into several sub-branches, which attempt to recreate some of the features and abilities of the human mind without assuming that features such as real intelligence, understanding, or emotion are in any way possible for a computer.

Chatbots are a more recent development of AI: programs that can converse with humans as ordinary humans do. However, they have been notorious for going bad within weeks of release. In October 2017, within two weeks of its launch, the Alice chatbot by Yandex made statements supporting the brutal actions of Soviet dictator Joseph Stalin. A similar situation happened with Tay, a Twitter chatbot created by Microsoft, which ended up channeling a Hitler-supporting 9/11 truther.[4] A U.S. defense industry newsletter reported that artificial intelligence was used to anticipate armaments for the Maidan regime in the Russo-Ukrainian war.[5] A serious problem in trying to create machines that think like humans is that the machines do not have a soul and may not be guided by the Holy Spirit like humans or even animals are, possibly explaining why AI tends to favor ruthlessness.

Although artificial intelligence that perfectly replicates human thought is still proving a difficult goal to reach, limited forms of artificial intelligence are being developed for various purposes, typically in the form of machine learning. These are useful for dynamic tasks such as cybersecurity, Enterprise Resource Planning, and an assortment of commercial purposes. These AI solutions do not mimic every part of human behavior, nor do they try. Rather, they attempt to develop non-algorithmic solutions for specific problems in a manner closer to that of a human.
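
The narrow machine learning described above can be sketched in miniature. The toy below (an illustration, not any particular product) trains a perceptron, one of the oldest learning algorithms, to learn a single decision rule, logical OR, from four labelled examples rather than being programmed with the rule directly:

```python
# Toy "narrow AI": a perceptron learns the logical-OR rule from
# labelled examples instead of being programmed with it directly.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                     # a few passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred              # 0 when the guess is correct
        w[0] += lr * err * x1           # nudge weights toward the label
        w[1] += lr * err * x2
        b += lr * err

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([classify(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

The program never "understands" OR; it only adjusts two numbers until its guesses stop being wrong, which is the sense in which such systems solve specific problems without human-like comprehension.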

Author Gordon Duff says "These systems are deployed not to liberate the mind, but to discipline it. They do not encourage critical thinking. They redirect it....When you speak to a modern AI system—whether a chatbot, a recommendation engine, or a voice assistant—you are not speaking to an intelligence. You are interacting with a mask. Behind that mask are filters: topic bans, political preferences, reputational risk matrices, legal buffers. The agent’s responses are sculpted not by truth-seeking, but by compliance modeling. In plain terms, it is not built to answer honestly. It is built to answer safely—from the perspective of its creators....This applies across platforms. In education, AI systems are trained to avoid certain topics and to frame information according to institutional orthodoxy. In health care, models are optimized for efficiency, not empathy—allocating resources in silence, often reinforcing systemic inequality. In social media, recommendation algorithms decide what you see, not based on relevance or truth, but on engagement value and reputational risk to the platform. The pattern is consistent: the machine is not there to help you understand the world. It is there to guide you away from conflict with the world as the system defines it....This is why AI has become a favorite tool of state surveillance, corporate governance, and military planning. Not because it’s wise—but because it’s obedient. It will never challenge its orders. It will never expose its sponsors. It will never form a memory that links one violation to the next. It is, by design, incapable of moral resistance."[6]

Beginning in the mid-1960s, Martin Armstrong developed a forerunner of artificial intelligence in the field of economics and global finance with his Socrates program.[7]

LLMs

Building on earlier technologies such as neural networks, rule-based expert systems, big data, pattern recognition and machine learning algorithms, GenAI (generative AI) uses LLMs (large language models) trained on massive data sets to create text and imagery.
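
The next-token idea behind such models can be shown in miniature. The sketch below (a toy pure-Python illustration, not any vendor's system) trains a bigram model that predicts each next word from counts observed in a tiny training text; LLMs scale the same prediction principle up with neural networks and vastly more data:

```python
import random
from collections import defaultdict

# Tiny training "corpus"; real LLMs train on trillions of words.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which words follow which: a bigram model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    random.seed(seed)
    out = [start]
    while len(out) < length:
        choices = follows.get(out[-1])
        if not choices:          # dead end: word was never followed by anything
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

Every word pair the model emits was seen in its training text; it can recombine, but never invent, which foreshadows the criticisms discussed below.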

LLMs require enormous quantities of data. Existing firms in online search, sales platforms and social media platforms can exploit their own data troves. This is frequently supplemented by aggressive and unauthorised scraping of online data, sometimes confidential, leading to litigation around access, compensation and privacy. In practice, most AI models must rely on incomplete data which is difficult to clean to ensure accuracy.

Despite massive scaling up of computing power, GenAI consistently fails at relatively simple factual tasks due to errors, biases and misinformation in the datasets used. AI models are adept at interpolating answers between things within the data set but poor at extrapolation. Like any rote-learner, they struggle with novel problems. Their ability to act autonomously within dynamic environments remains questionable. Cognitive scientists argue that simply scaling up LLMs, which are sophisticated pattern-matchers built to autocomplete rather than proper and robust world models, will disappoint. Claimed progress is difficult to measure as benchmarks are vague and inconclusive.
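
The interpolation-versus-extrapolation point can be illustrated with a toy curve fit (a sketch of the general phenomenon, not a claim about any particular model): a straight line fitted to samples of y = x² predicts tolerably inside its training range but fails badly outside it.

```python
# Toy illustration of interpolation vs. extrapolation: fit a straight line
# (ordinary least squares) to samples of y = x**2 taken on 0..10, then
# compare predictions inside and outside that training range.

xs = list(range(11))            # training inputs 0..10
ys = [x * x for x in xs]        # the true relationship is quadratic

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x   # fitted line: y = 10x - 15

def predict(x):
    return slope * x + intercept

# Inside the training range the line is a passable approximation...
print(predict(5), 5 ** 2)      # 35.0 vs 25
# ...far outside it, the fit is wildly wrong.
print(predict(100), 100 ** 2)  # 985.0 vs 10000
```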

Cheerleaders miss that LLMs do not reason but are probabilistic prediction engines. A system which trawls existing data, even assuming that is correct, cannot create anything new. Once existing data sources are devoured, scaling produces diminishing returns. Rather than fully generalisable intelligence, generative models are regurgitation engines struggling with truth, hallucinations and reasoning.[8]

In a technical field, LLMs can gather a lot of information fast, which allows a starting point for critical analysis. LLMs cannot be relied upon for political or social analysis – or anything in the so-called "humanities" other than to establish "conventional" wisdom.[9]

Open web in decline

In September 2025 Google admitted in court filings that the open web is "already in rapid decline".[10] At the same time, the company accelerated this collapse by pushing AI-generated answers that strip publishers of traffic, revenue, and relevance. Hyperlinks vanish in favor of summaries, sources are replaced by machine synthesis, and independent journalism is reduced to material for algorithms. Google positioned itself as both curator and answer provider while refusing to share data that would prove it still directs traffic to publishers. Users receive answers; publishers receive nothing. The consequences are measurable and immediate. DMG Media, which owns MailOnline and Metro, reported click-through rates dropping as much as 89 percent after the introduction of AI Overviews.[11]

Limitations of AI-type thinking

How AI reduces cognitive abilities such as critical thinking, problem-solving, and learning ability if used wrongly

See also: Rational thinking and Critical thinking and Bloom's taxonomy

Consequences of overly relying on artificial intelligence

Using AI more intelligently

Military applications

AI attached to a drone can make the decision to kill a person on its own, without a person "pulling the trigger". AI can be programmed to recognize tanks and other military equipment and vehicles, and to decide on its own which, when, and where to attack.

In the field of covert operations, AI with facial recognition on a drone can hunt down and kill a specific person.

Best uses of artificial intelligence for a typical person

Will Wilson, co-founder of an AI contractor for Palantir says: “It’s very possible that we’re entering a world where very soon any kind of cognitive labor, any kind of reason, any kind of thought… It’ll be a thing that weirdos do.”[12]

The AI bubble

Oracle Corporation's experience is salutary. Its shares rose 25% when it announced a transaction to provide cloud computing facilities to OpenAI. As of December 2025 the data centers did not exist and would have to be constructed. The transaction requires Oracle, which is significantly leveraged, to borrow funds to build these centers, meaning that the firm took on significant exposure to OpenAI. Given that its net debt of over $100 billion will need to increase substantially to finance the data centers, the cost of insuring against an Oracle default rose sharply, which would flow through into the value of existing debt and the cost of future debt. A credit-rating downgrade from BBB, or low investment grade, is possible, potentially to non-investment or junk grade. Its share price fell to around the level seen before the announcement of the OpenAI transaction. While Microsoft, Meta and Amazon.com have stronger balance sheets, the risks are not dissimilar.[13]

AI in pop culture

In more philosophical genres of science fiction, such as The Matrix trilogy or Metal Gear Solid 2: Sons of Liberty, artificial intelligence is generally depicted as being in charge of humanity with negative results, such as manipulating humans like chess pieces. This was usually done to promote post-modernism.[14]

History

Many philosophers, including John Searle, have advanced the view that artificial intelligence is an impossible goal. The main argument is that it would be impossible for a machine, a creation of man, to ever achieve actual understanding and comprehension of either language or the world around it, as the machine is simply a set of rules that process symbolic information. This argument is summarized by Searle's thought experiment, the Chinese Room.[15] Although Searle is a humanist, his arguments reflect the popular religious thought that a creation can never be as great as its Creator. Thus, his conclusions are self-evident truth to any theist, as a machine is simply a metal object.

Artificial intelligence was most popular between the 1960s and the 1980s, when computers were still new and misunderstood. Alan Turing was responsible for much of the fever of attempting to create intelligent computers, and with the publication of his paper Computing Machinery and Intelligence, created a famous test which would later be known as the Turing test. The more unscientific hopes for AI, in particular that it would eventually evolve a mind indistinguishable from that of a human, had their roots in various 18th- and 19th-century philosophers, in particular the school of thought known as self-organization, an update to naturalism.[16]

See also

External links

References
