This article is from a special issue of Sciences et Avenir n° 199, dated October-November 2019.
The basics of artificial intelligence
Artificial intelligence: The term designates a certain type of computer program that simulates human abilities such as the perception and recognition of shapes, images and sounds, automatic translation, or human/machine dialogue… These programs can run on a variety of devices: servers, phones, smart speakers, search engines or even robots. Alongside symbolic AI (see below), which is now little used, the most widespread AI technology today is machine learning. It processes masses of data by means of algorithms. An algorithm is nothing more than a "recipe", that is to say a series of very precise instructions for turning the "ingredients" (raw data) into a defined "dish" (results).
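The recipe metaphor can be made concrete in a few lines of Python (a hypothetical illustration, not taken from the article): a precise series of steps turns raw numbers into a finished result.

```python
def average_brightness(pixels):
    """A tiny 'recipe': precise instructions turning raw data into a result."""
    total = 0
    for p in pixels:            # step 1: add up every ingredient
        total += p
    return total / len(pixels)  # step 2: divide to plate the final dish

print(average_brightness([10, 200, 45]))  # → 85.0
```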
Symbolic AI: The program is based on rules defined by humans. This approach is rarely used today, as machine learning proves more effective in many cases.
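A minimal sketch of the symbolic approach (a hypothetical thermostat example): the behavior comes entirely from human-written rules, and nothing is learned from data.

```python
# Each rule pairs a human-written condition with an action.
rules = [
    (lambda temp: temp > 30, "turn on cooling"),
    (lambda temp: temp < 15, "turn on heating"),
]

def decide(temp):
    for condition, action in rules:
        if condition(temp):   # fire the first rule whose condition holds
            return action
    return "do nothing"

print(decide(35))  # → turn on cooling
print(decide(20))  # → do nothing
```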
Machine learning: The program learns on its own, by studying examples, to recognize images and sounds. Deep learning, often equated with AI through a misuse of language, is only one part of machine learning. It relies on networks of artificial neurons inspired by the human brain, organized in tens or hundreds of layers of thousands of neurons.
Deep learning: This happens in two stages. The first is learning: to teach an algorithm to recognize, for example, a cat, we give it as input pictures of cats, each labeled as such. Each layer of the network responds to a particular aspect of the image: ear angle, eye shape, color… If we point out to the algorithm that it has misidentified the animal, it adjusts its own weights and thresholds, strengthening or eliminating connections, until it obtains the correct result. The operation is repeated with thousands of pictures of cats. Once the program has acquired the notion of what a cat is, it can move on to prediction: presented with an image it has never seen, it knows how to recognize the animal.
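The two phases described above can be sketched with a single artificial neuron (a hypothetical toy, far smaller than a real deep network): it learns the logical AND from labeled examples, then predicts.

```python
# Labeled examples: inputs and the answer a human attached to each.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = [0.0, 0.0]   # connection weights, adjusted during learning
threshold = 0.0  # firing threshold, also adjusted
lr = 0.1         # how strongly each mistake changes the weights

def predict(x):
    s = x[0] * w[0] + x[1] * w[1]
    return 1 if s > threshold else 0

# Phase 1 -- learning: whenever the answer is wrong, nudge the connections.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        threshold -= lr * error

# Phase 2 -- prediction: the trained neuron now answers correctly.
print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```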
The functioning of digital neurons is inspired by biological neurons: the latter receive electrical signals from the neurons connected to them. In response, they may or may not send a signal along the axon that connects them to the next neuron. The artificial neuron mimics this behavior with a mathematical function: it combines its input data (X) by assigning a coefficient, or weight (P), to each. If the weighted sum exceeds a certain threshold (S), the neuron sends a 1 to the next layer; otherwise it sends a 0. The program keeps adjusting the weights and thresholds of the connections until the image is correctly recognized.
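A single artificial neuron, as just described, fits in two lines of Python (the input values, weights and threshold below are hypothetical):

```python
def neuron(inputs, weights, threshold):
    s = sum(x * p for x, p in zip(inputs, weights))  # weighted sum of inputs X with weights P
    return 1 if s > threshold else 0                 # fire (1) or stay silent (0)

print(neuron([1.0, 0.5], [0.8, 0.4], 0.9))  # 0.8 + 0.2 = 1.0 > 0.9 → 1
print(neuron([1.0, 0.5], [0.8, 0.4], 1.2))  # 1.0 < 1.2 → 0
```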
AI’s main learning techniques
Supervised learning: The algorithm is trained to perform a task (calculation, prediction, identification of an object, creation of content, etc.) from a labeled database: the data (images, sounds or others) are associated by hand with the words that designate them, a long and expensive process. The algorithm is then expected to recognize similar data that do not appear in the training set. Deep learning is a form of supervised learning, capable of processing data through multiple successive layers.
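A minimal supervised-learning sketch (a hypothetical example using a nearest-neighbour classifier, one of the simplest such algorithms): the training set is hand-labeled, and the classifier then answers for points it has never seen.

```python
# Hand-labeled training data: each point carries the word that designates it.
labeled = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
           ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def classify(point):
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # copy the label of the closest hand-labeled example
    return min(labeled, key=lambda ex: sq_dist(ex[0], point))[1]

print(classify((1.1, 1.1)))  # → cat
print(classify((4.5, 5.0)))  # → dog
```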
Unsupervised learning: The algorithm learns on its own by "observing" the data, without labels. It works out by itself how to classify the data and by what criteria, grouping some items together and separating others.
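An unsupervised sketch (a hypothetical example using 2-means clustering): no labels are given, yet the algorithm groups the numbers by proximity alone.

```python
# Unlabeled data: two natural clusters, but nothing says so.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = [data[0], data[3]]  # crude initialization from two data points

for _ in range(10):
    groups = ([], [])
    for x in data:
        # assign each point to its nearest center
        i = 0 if abs(x - centers[0]) < abs(x - centers[1]) else 1
        groups[i].append(x)
    # move each center to the mean of its group
    centers = [sum(g) / len(g) for g in groups]

print(sorted(centers))  # two centers emerge: one near 1.0, one near 9.1
```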
Reinforcement learning: The algorithm improves by receiving positive signals (called "rewards") or negative ones when it has made a good or bad decision, without being told exactly what it did well or badly. By accumulating these signals, it ends up adopting the desired behavior. This trial-and-error progression, closer to human learning, is widely used for game-playing AI. This is how DeepMind's AlphaGo Zero was trained to play the game of Go.
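A toy sketch of the trial-and-error idea (a hypothetical two-armed bandit, vastly simpler than AlphaGo Zero): the agent chooses between two actions and only ever sees a reward signal, never an explanation of what it did right or wrong.

```python
import random
random.seed(0)

reward_of = {"A": 0.2, "B": 0.8}   # true rewards, hidden from the agent
value = {"A": 0.0, "B": 0.0}       # the agent's running estimates

for _ in range(1000):
    # mostly exploit the best-known action, but explore 10% of the time
    if random.random() < 0.1:
        action = random.choice("AB")
    else:
        action = max(value, key=value.get)
    reward = reward_of[action]                       # the only feedback received
    value[action] += 0.1 * (reward - value[action])  # nudge estimate toward reward

print(max(value, key=value.get))  # the agent ends up preferring the better action
```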
Learning by modeling: The algorithm relies on a set of laws, rules and behavior models to make decisions or draw conclusions when the context lends itself to applying those rules.
Incremental learning: As part of embedded artificial intelligence (where analysis and data storage take place locally on the device, not on a remote server), the algorithm continuously improves its calculations and predictions from permanently updated data.
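An incremental-learning sketch (a hypothetical example): the estimate is refined on the device one reading at a time, without ever storing the whole data stream.

```python
count, mean = 0, 0.0

def update(x):
    global count, mean
    count += 1
    mean += (x - mean) / count   # fold the new sample into the running mean

# Each reading updates the estimate and is then discarded.
for reading in [2.0, 4.0, 6.0]:
    update(reading)

print(mean)  # → 4.0
```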
Generative Adversarial Network (GAN): A category of unsupervised learning algorithms that pits two algorithms against each other. The first, called the generator, produces a result that it presents to the second, called the discriminator. The discriminator scores this result against the database on which it was trained. The generator then adjusts its aim by issuing a new result, and so on, until it produces data that one might believe came from the database itself. The technique is widely used in "artistic" AI.
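The adversarial loop can be sketched in heavily simplified form (a hypothetical toy: real GANs train two neural networks with gradient descent, while here the generator is a single number and the "discriminator" is frozen to a simple statistic of the real data):

```python
import random
random.seed(1)

# 'Real' data cluster around 5.0; this plays the role of the training database.
real = [random.gauss(5.0, 0.5) for _ in range(200)]
real_mean = sum(real) / len(real)   # the discriminator's frozen notion of 'real'

gen_param = 0.0                     # the generator starts far from the real data

for _ in range(200):
    fake = gen_param + random.gauss(0, 0.5)   # generator proposes a sample
    critique = real_mean - fake               # discriminator: how far from real?
    gen_param += 0.1 * critique               # generator adjusts its aim

print(gen_param)  # close to 5.0: the fakes now resemble the real data
```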