
Milestones in artificial intelligence
Gizem Baruk | 22.06.2022

Artificial intelligence has been with us since the 1950s and is one of humanity's most significant inventions. Robotics as well as processor and storage technologies have become particularly important in today's economy and society. For decades, scientists have been researching and working on the development of artificial intelligence, with the goal of building machines that think and learn like humans in order to support them in their everyday lives.

Here is an overview of the most significant milestones in AI technology:
1950 The Turing test
The British mathematician Alan Turing showed that his theoretical computing machine, the "Turing machine", would be able to carry out cognitive processes if they could be broken down into individual steps and represented by an algorithm. In doing so, he laid the foundation for what we understand today as artificial intelligence.

1956 Origin of the term “artificial intelligence”
In 1956, numerous scientists met for a conference at Dartmouth College in the US state of New Hampshire. They were of the opinion that aspects of learning and other characteristics of human intelligence could be simulated by machines. The computer scientist John McCarthy proposed the term "artificial intelligence" for this. During the conference, the world's first AI program, called "Logic Theorist", was written; it was able to prove several dozen mathematical theorems.

1966 Eliza, the first chatbot
Joseph Weizenbaum, a German-American computer scientist, developed a computer program called "ELIZA" that was intended to show how computers could enter into dialogue with people using "natural language". The best-known version of "ELIZA" simulated a psychotherapy session in which the computer searched the statements people typed in for keywords and played them back in modified form. The program was very successful because it followed clearly defined, pre-programmed rules.

1972 AI enters medicine
Artificial intelligence called "MYCIN" is finding its way into practice: The expert system developed by Ted Shortliffe at Stanford University is used to treat diseases. It should support the diagnosis and treatment of infectious diseases. MYCIN analyzes numerous parameters to identify pathogens and the best antibiotics for therecommended to patients.

1986 “NETtalk” speaks
Terrence J. Sejnowski and Charles Rosenberg taught their program "NETtalk" to speak by feeding it example sentences and phoneme sequences. "NETtalk" could read words and pronounce them correctly, and it could apply what it had learned to words it did not know. This made it one of the first artificial neural networks, which resembled the human brain in structure and function.

1997 Computer beats world chess champion
In 1997, IBM's chess machine "Deep Blue" beat the reigning world chess champion Garry Kasparov in a match. This was considered a historic success for machines in an area that had previously been dominated by humans. Unlike today's systems, Deep Blue did not learn the game; it beat its human opponent through sheer, fast computing power.

2011 AI “Watson” wins quiz show
On a US TV quiz show, the computer program “Watson” competed in the form of an animated on-screen avatar and won against its human competitors. “Watson” thus proved that it could understand natural language and quickly answer difficult questions.

2014 Development of GANs
Ian Goodfellow, an American computer scientist, introduced GANs (Generative Adversarial Networks) for the first time. These are a group of unsupervised learning algorithms that can be used for creative applications. Among other things, they are used to create photorealistic images, model movement patterns in videos, create 3D models of objects from 2D images and process astronomical images.
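The core idea behind GANs can be illustrated with a few lines of code. The sketch below is a minimal, hypothetical toy example (not from the original post): a generator learns to mimic a simple 1-D Gaussian distribution while a discriminator learns to tell real samples from generated ones, and the two are trained against each other. It assumes PyTorch is available; the network sizes and hyperparameters are arbitrary illustrations.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a single value; Discriminator: classifies real vs. fake
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Train the discriminator: push real samples toward 1, generated samples toward 0
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to fool the discriminator into outputting 1
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

This two-player setup is what drives the generator to produce increasingly realistic samples, which is what makes GANs suitable for the image and video applications mentioned above.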

AI reaches everyday life
Artificial intelligence also made its way into people's daily lives through the further development of hardware and software. Powerful processors and graphics cards in computers, smartphones and tablets made AI programs accessible to everyone. Voice assistants in particular became increasingly popular during this period: Apple's "Siri" came onto the market in 2011, Microsoft introduced its "Cortana" software in 2014, and Amazon presented the "Alexa" voice service with its Echo speaker in 2015.

2016 AlphaGo cracks Go
Because of its complexity, the game Go was long considered unsolvable for artificial intelligence. Google's "AlphaGo" learned the game using reinforcement learning and in 2016 competed against one of the world's best players, beating him 4:1.

2018 AI debates space travel and arranges a hairdressing appointment
In June 2018, the IBM-developed program “Project Debater” took part in a debate on complex topics with two debating masters – and performed remarkably well.
Previously, Google had demonstrated at a developer conference how the AI “Duplex” called a hairdresser and arranged an appointment in a conversational tone, without the person on the other end of the line realizing that they were talking to a machine.

Why is AI only now so present in our everyday lives?
Through commercialization, artificial intelligence has left the research laboratories and entered our everyday lives. The main reasons for this are, on the one hand, more powerful computer systems and AI methods and, on the other, the large amounts of digital data, such as images and documents, that can now be collected and stored everywhere. This makes it possible to train algorithms successfully and to automate numerous processes across a wide variety of industries and companies.



