At the start of the first keynote, the presenter asked the question: “Is anyone familiar with machine learning?”
Over the course of half an hour, the speaker gave us a summary worthy of an entire AI bachelor's programme. During these fruitful 30 minutes, the speaker explained the difference between traditional programming and machine learning: in traditional programming, the coder determines what the code does with a given input, while in machine learning both the inputs and the desired outputs are simultaneously “fed” to the system, which then learns to classify or predict an answer.
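To make that contrast concrete, here is a minimal, hypothetical sketch (not from the keynote): first a hand-written rule, then the same decision “learned” from input–output examples. The tallness task, the function names, and the midpoint-threshold learner are all illustrative assumptions.

```python
# Traditional programming: the coder writes the rule explicitly.
def is_tall_traditional(height_cm):
    return height_cm >= 180  # threshold chosen by the programmer


# Machine learning: inputs AND desired outputs are fed to the system,
# which infers the rule itself. (Toy example: pick a decision boundary
# halfway between the two classes.)
def learn_tall_rule(examples):
    tallest_short = max(h for h, tall in examples if not tall)
    shortest_tall = min(h for h, tall in examples if tall)
    boundary = (tallest_short + shortest_tall) / 2
    return lambda height_cm: height_cm >= boundary


examples = [(160, False), (170, False), (185, True), (195, True)]
is_tall_learned = learn_tall_rule(examples)
print(is_tall_learned(190))  # True: the rule came from data, not from the coder
```

The difference is exactly the one the speaker described: in the first function the programmer encodes the knowledge; in the second, the system extracts it from labelled examples.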
This may still sound a bit vague, so let me illustrate it with an example of machine learning. In a system that recognizes digits, images of handwritten digits are fed into an AI (Artificial Intelligence) network that learns the features of these digits over time. Every time a request is made, the system outputs the digit with the highest probability of being in the image, based on the learned features. When the output is wrong, the programmer can tell the system what the correct answer was, so that it adjusts its internal weights and becomes more accurate on its next attempt. This adjustment process is called “backpropagation,” and it is part of supervised learning, in which the programmer can tell the system what it’s doing wrong and what it’s doing right.
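The digit-recognition loop described above can be sketched in miniature. The following is an assumed, heavily simplified illustration, not the keynote's actual system: a single-neuron classifier on tiny 3×3 “images” of a “1” (vertical bar) versus a “0” (box), trained by gradient descent so that each wrong prediction nudges the weights toward the correct label. The patterns, hyperparameters, and function names are all hypothetical.

```python
import math
import random

# Toy "handwritten digits": 9 pixels each, values 0 or 1.
DIGIT_ONE = [0, 1, 0, 0, 1, 0, 0, 1, 0]    # vertical bar -> label 1
DIGIT_ZERO = [1, 1, 1, 1, 0, 1, 1, 1, 1]   # box -> label 0


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def train(samples, epochs=500, lr=0.5):
    random.seed(0)
    weights = [random.uniform(-0.1, 0.1) for _ in range(9)]
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            # Forward pass: probability that the image is a "1".
            pred = sigmoid(sum(w * x for w, x in zip(weights, pixels)) + bias)
            # Backward pass (the "backpropagation" idea): the error signal
            # tells each weight how to change so the next guess improves.
            error = pred - label
            weights = [w - lr * error * x for w, x in zip(weights, pixels)]
            bias -= lr * error
    return weights, bias


def predict(weights, bias, pixels):
    p = sigmoid(sum(w * x for w, x in zip(weights, pixels)) + bias)
    return 1 if p > 0.5 else 0  # output the most probable digit


samples = [(DIGIT_ONE, 1), (DIGIT_ZERO, 0)]
weights, bias = train(samples)
print(predict(weights, bias, DIGIT_ONE))   # 1
print(predict(weights, bias, DIGIT_ZERO))  # 0
```

A real digit recognizer would use a multi-layer network and thousands of training images, but the supervised-learning loop, predict, compare against the known label, adjust, is the same one the speaker outlined.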
Furthermore, the speaker also explained why machine learning – which already emerged as a subfield of computer science in the late 1950s – has recently surged back into popularity. Nowadays, we have big data, cheaper computing power, and better algorithms. This combination revived machine learning and gave it useful applications, such as fraud detection and facial recognition systems. Moreover, machine learning also powers self-driving cars, which operate autonomously by using neural networks.