Sidhak Verma
I am Sidhak, a student and content writer. I share my ideas on social media and write about ways of earning money online.
Artificial intelligence (AI) is one of the most fascinating and disruptive technologies of the contemporary era. Its development has had a profound impact on industries, society, and how we interact with the world. But when was artificial intelligence created? While the concept has ancient roots in mythology and philosophy, AI in its current form has a much more recent history.
The concept of creating machines that can replicate human intelligence is not new. Ancient mythologies, such as the Greek myth of Talos, and the works of thinkers such as Aristotle imagined artificial beings capable of thinking and reasoning. However, these were speculative ideas, and the groundwork for genuine AI would not be laid until much later.
The term “artificial intelligence” originated in the 1950s. Here are some major milestones in AI development:
Alan Turing, a British mathematician, proposed the Turing Test in his groundbreaking 1950 paper "Computing Machinery and Intelligence". The test assesses a machine's capacity to display intelligent behaviour indistinguishable from that of a person. While the Turing Test is not itself an AI system, it established the framework for future research into machine intelligence.
American computer scientist John McCarthy coined the term "Artificial Intelligence" when he organized the first AI conference at Dartmouth College in 1956. This event signalled the start of AI as an academic discipline. It brought together leading thinkers including McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, all of whom believed that machines could imitate human intelligence.
Following the Dartmouth Conference, researchers began creating early AI programs. Allen Newell and Herbert A. Simon created the Logic Theorist (1955), which could prove mathematical theorems, while Joseph Weizenbaum developed ELIZA (1966), an early natural language processing program. These early programs showed that machines could perform tasks previously thought to require human intelligence.
Despite early optimism, progress in AI slowed in the 1970s and 1980s, a period known as the "AI Winter." Limited computing power, a lack of funding, and unfulfilled promises led many researchers to doubt AI's potential. However, the work done during this period paved the way for future advancements.
AI regained momentum in the 1990s and 2000s. Advances in computer hardware, greater data availability, and new algorithms enabled important breakthroughs.
1. Machine Learning (1990s-Present): One of the most significant developments in AI was machine learning, which allows systems to learn patterns from data rather than being explicitly programmed. This paved the way for deep learning, which uses artificial neural networks loosely inspired by the human brain and underpins today's AI applications, including self-driving cars and virtual assistants (a brief illustrative sketch follows this list).
2. AI in Gaming (1997): In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, a landmark moment in AI research. Deep Blue's victory demonstrated the power of AI in complex problem-solving tasks.
3. AI in the 21st Century (2000s-Present): The twenty-first century has seen AI transform a wide range of industries. AI is becoming more prevalent in everyday life, with virtual assistants such as Apple's Siri and Amazon's Alexa, as well as machine learning systems powering recommendation engines and driverless vehicles. The rise of big data and increasingly powerful computing has accelerated the growth of AI, which now has applications in healthcare, finance, and entertainment.
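To make the idea of "learning from data rather than being explicitly programmed" concrete, here is a minimal sketch in plain Python (not drawn from any specific system mentioned above; the data, variable names, and fitting loop are purely illustrative). It fits a simple straight-line rule to example data by gradually reducing its prediction error, which is the same basic principle that, at far larger scale, drives modern machine learning and deep learning.

```python
# Toy machine learning example: learn the rule behind some example data
# instead of hard-coding it. Illustrative only; real systems use libraries
# such as scikit-learn or PyTorch and far more complex models.

# Training data generated from a hidden rule (y = 2x + 1) the program never sees.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0        # the model starts with no knowledge of the rule
learning_rate = 0.01

# Repeatedly nudge w and b in the direction that reduces the average squared error.
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned rule: y = {w:.2f}*x + {b:.2f}")      # approaches y = 2x + 1
print(f"prediction for x=10: {w * 10 + b:.2f}")      # about 21, never explicitly programmed
```

Deep learning follows the same recipe, but with millions of adjustable parameters arranged in layered neural networks instead of the two numbers used here.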
While artificial intelligence as we know it was formally established in the 1950s, its origins reach much further back. Decades of research and experimentation have shaped the growth of artificial intelligence, from early theoretical concepts to the powerful technology we use today. As AI advances, it becomes evident that the journey is far from over, with fresh innovations and breakthroughs emerging regularly. The future of AI promises even greater advances that could completely change how we live, work, and interact with the world.