History Of Machine Learning

What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.

Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.

Early History

Finding out the actual early history of machine learning is kind of tricky, to be honest. Wikipedia, in its article on machine learning, claims that "The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the field of computer gaming and artificial intelligence".

At the same time, in its timeline post, it states that the first steps towards machine learning were taken in 1763, when the work underpinning Bayes' theorem was presented.

AI in Radiology, however, would claim in their excellent post that the first neural network appeared in 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper about neurons and how they work. They went on to model this with an electrical circuit, and thus the neural network was born.

On the other hand, Dataversity claims in its post that "Machine Learning is, in part, based on a model of brain cell interaction. The model was created in 1949 by Donald Hebb in a book titled The Organization of Behavior".

Whatever the case, machine learning as we think of it today started in 1950, with the introduction of the world-famous Turing test. The test is fairly simple: for a computer to pass, it has to be able to convince a human that it is a human and not a computer.

From then on, it was off to the races. The first machine to play checkers was created in 1952, and almost a full decade later, in 1963, the first machine that could play tic-tac-toe was created.

 

Rise Of The Field

The 1980s and 1990s were an interesting time for the field. Interest in neural networks picked up again when John Hopfield suggested creating a network with bidirectional connections, similar to how neurons actually work. Furthermore, in 1982, Japan announced it was focusing on more advanced neural networks, which incentivised American funding in the area and thus led to more research.

The machine learning industry, which included a large number of researchers and technicians, was reorganised into a separate field and struggled for nearly a decade. Its goal shifted from training for artificial intelligence to solving practical, service-oriented problems, and its focus shifted from the approaches inherited from AI research to methods and tools drawn from probability theory and statistics. During this time, the ML industry maintained its focus on neural networks and then flourished in the 1990s. Most of this success was a result of the growth of the Internet, which brought an ever-growing supply of digital data and a way to deliver services online.

 

Late History

Since the start of the 21st century, many businesses have realised that machine learning can increase their computational and predictive capabilities. This is why they are investing in it more heavily, in order to stay ahead of the competition.

We begin to see things such as Google Brain, a deep neural network created by Jeff Dean at Google, which focused on pattern detection in images and video; AlexNet, which won the ImageNet competition by a large margin in 2012 and led to the widespread use of GPUs and convolutional neural networks in machine learning; and DeepFace, a deep neural network created by Facebook, which Facebook claimed could recognise people with the same precision as a human.

Further advances were helped along by GPUs, which revolutionised the field and became extremely important in the world of machine learning. GPUs can have hundreds of times more (much simpler) processing cores per chip than CPUs. The flip side, however, is that whereas CPUs can perform any kind of computation, GPUs are tailored to specific use cases in which an operation (addition, multiplication, etc.) has to be performed on vectors, which are essentially lists of numbers. A CPU would perform the operation on each number in the vector sequentially, i.e. one by one, which is slow. A GPU performs the operation on every number in the vector in parallel, i.e. at the same time.
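To make that difference concrete, here is a minimal sketch in Python (an illustrative assumption on my part, not something from the original article): it compares an element-by-element loop with a vectorised NumPy operation, which exploits the same idea GPUs implement in hardware at a much larger scale.

    import time
    import numpy as np

    # Two vectors of one million numbers each (illustrative size).
    n = 1_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # Loop version: each addition happens one after another.
    start = time.perf_counter()
    result_loop = [a[i] + b[i] for i in range(n)]
    loop_time = time.perf_counter() - start

    # Vectorised version: the whole vector is handed to an optimised routine
    # as a single unit (on a GPU, every element would be handled by its own
    # core at the same time).
    start = time.perf_counter()
    result_vec = a + b
    vec_time = time.perf_counter() - start

    print(f"loop: {loop_time:.4f}s, vectorised: {vec_time:.4f}s")

Even on a CPU the vectorised version is dramatically faster, which is why operations on vectors and matrices map so well onto GPU hardware.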

Vectors and matrices, which are grids of numbers (or lists of vectors), are essential to machine learning applications, and Nvidia is credited with making the world's first GPU, the GeForce 256, in 1999. At the time, launching the product was a risk, as it was an entirely new kind of product. However, because video games rely heavily on vector calculations and benefited from a huge leap in performance, GPUs proliferated. It was years later that mathematicians, scientists and engineers realised that GPUs could speed up the computations used in their own disciplines, thanks to that same reliance on vectors. That realisation made neural networks, a very old idea, leaps and bounds more practical, and GPU companies, particularly Nvidia, benefited hugely from the "machine learning revolution". Nvidia's stock price has increased roughly 18-fold since 2012, the year in which AlexNet demonstrated the importance of GPUs in machine learning.

Relations To Other Fields

Connection to AI: As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor.

Connection to Data Mining: Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases).
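As a rough illustration of that distinction, here is a minimal sketch (the toy data points, the scikit-learn models and their parameters are illustrative assumptions): a classifier predicts labels it was trained on, while a clustering algorithm discovers previously unknown groups without being given any labels at all.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Four toy points forming two obvious groups.
    X = np.array([[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]])
    y = np.array([0, 0, 1, 1])  # known labels, used only by the predictive model

    # Machine-learning flavour: learn from known labels, then predict new cases.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[1.05, 1.0], [5.05, 5.0]]))  # predicts labels for unseen points

    # Data-mining flavour: no labels given; discover structure (clusters) in the data.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)  # groups the points into two clusters (cluster ids are arbitrary)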

Connection to Optimization: Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples).
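Here is a minimal sketch of that formulation (the toy data, the mean squared error loss and the gradient-descent settings are illustrative assumptions, not something described in this article): training means repeatedly adjusting the model's parameters so as to reduce the loss on the training examples.

    import numpy as np

    # Toy data: y is roughly 3*x + 1 plus a little noise (illustrative assumption).
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 3 * x + 1 + rng.normal(scale=0.1, size=100)

    # Model: y_hat = w*x + b. Training = minimising the mean squared error loss.
    w, b = 0.0, 0.0
    learning_rate = 0.1

    for step in range(500):
        y_hat = w * x + b
        error = y_hat - y                  # discrepancy between prediction and target
        loss = np.mean(error ** 2)         # the loss function being minimised
        grad_w = 2 * np.mean(error * x)    # gradient of the loss w.r.t. w
        grad_b = 2 * np.mean(error)        # gradient of the loss w.r.t. b
        w -= learning_rate * grad_w        # gradient descent step
        b -= learning_rate * grad_b

    print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")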

Needless to say, there are more fields that often intertwine or work well with machine learning, such as statistics or even economics.

 

Final Thoughts

This article follows up on our previous discussions on automation and digitization in order to discuss machine learning, something that at first I thought would be a combination of the two. With machine learning being brought up more and more every day, it is interesting to see where things began. I, for one, enjoyed the trip down memory lane.
