Clarification of concepts

Artificial intelligence

Artificial intelligence (AI) is any artificial system that emulates human-like intelligence and is designed to perform as well as or better than humans on specific tasks. Popular AI applications today include speech recognition, translation, text generation, computer vision, and recommender engines.

Nowadays, in non-technical contexts, when we talk about AI we are generally referring to Deep Learning applications, owing to the popularity and attention that Deep Learning (DL) has received in recent years. However, it is important to remember that DL is only one of many AI methods and techniques.

Machine Learning

Classical Machine Learning (ML) is a sub-field of Artificial Intelligence that allows computers to learn structures and patterns in data without explicit instructions. Machine Learning always starts with training data, such as text or images, fed to an algorithm. The algorithm is programmed to fit the data as well as it can, and in doing so it identifies (“learns”) patterns in the data, which it then uses to categorise and label new data. The quality of an ML model’s predictions depends, among other factors, on the quality and quantity of the data at its disposal during training.
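
To make this workflow concrete, here is a minimal sketch in Python using scikit-learn (an assumed dependency; any ML library would serve): a classifier is fitted to labelled training data and then used to label new, unseen data.

    # Classical ML workflow: fit an algorithm to training data, then
    # use the learned patterns to label new data. Requires scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Training data: flower measurements (features) and species (labels).
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Training": the algorithm fits the data and learns its patterns.
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # The learned patterns are then used to categorise new, unseen data.
    print(model.predict(X_test[:5]))
    print("accuracy:", model.score(X_test, y_test))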

Deep Learning

Deep Learning (DL) is a subset of Machine Learning that uses neural networks, a popular family of models, to describe the data. It is called “deep” because it uses deep networks, i.e., networks made of many layers of interconnected nodes. This complexity helps DL algorithms make very accurate predictions while relying less on human input than other machine learning approaches. The accuracy comes at a cost: Deep Learning models require a lot of data and energy to learn effectively. Moreover, it is crucial that the data fed to the algorithm be of high quality, representative of the domain, and free of biases and discriminatory elements, to avoid these biases becoming embedded in the model’s predictions.
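
As a toy illustration of what “deep” means, the following Python sketch (using only NumPy, with random weights standing in for the weights a real model would learn from data) passes an input through several layers of nodes:

    # Illustrative only: random weights instead of learned ones.
    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [4, 16, 16, 16, 3]  # input, three hidden layers, output

    # One weight matrix and one bias vector per layer.
    weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        # Each layer transforms its input and passes the result on;
        # stacking many such layers is what makes the network "deep".
        for W, b in zip(weights, biases):
            x = np.maximum(0, x @ W + b)  # ReLU activation
        return x

    print(forward(rng.normal(size=4)))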

While the concept of deep learning has been around for several decades, it only became the dominant approach in AI in the mid-2010s, thanks to advances in computing power and the availability of large datasets. Typical applications that rely on DL algorithms are image recognition (e.g., Google Lens), translation apps, automatic dictation, text generation (e.g., ChatGPT), and self-driving cars.

Large Language Models (LLMs)

Large Language Models (LLMs) are deep learning models that deal with textual data. Depending on the application and model, they can generate, recognise, modify, summarise, and translate text, among other things. Some popular applications of LLMs include the following (a short code sketch follows the list):

  • Chatbots and AI assistants
  • Search engines’ summarised answers at the top of search results
  • Text autocompletion (e.g., for emails)
  • Text classification (e.g., spam detection)
  • Writing support for any kind of text, including software code
  • Translation engines
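
As one concrete example of the text-classification use case above, the sketch below uses the Hugging Face transformers library (an assumed dependency; the first call downloads a default model). It classifies text by sentiment; an actual spam filter would use a model fine-tuned for spam detection.

    # LLM-based text classification via a pre-trained pipeline.
    # Requires "transformers" plus a backend such as PyTorch.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default model
    print(classifier("Congratulations! You have won a prize, click here now!"))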

LLMs are probabilistic: they build a probability map of words and word sequences from large bodies of training data harvested from web pages, e-books, e-journals, and so on. In generative models such as ChatGPT, word choice is based on conditional probability: the model generates the word most likely to follow the preceding sentences and paragraphs.
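
The principle can be shown with a toy bigram model in plain Python: count which words follow which in a training text, then repeatedly sample a likely next word. Real LLMs condition on far longer contexts using neural networks, but the idea of choosing words by conditional probability is the same.

    # Toy next-word generation based on conditional probability.
    import random
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count which words follow each word in the training text.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # Generate: repeatedly sample a next word given the previous one.
    word, output = "the", ["the"]
    for _ in range(8):
        counts = following[word]
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        output.append(word)
    print(" ".join(output))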

Learning Analytics

Learning analytics is the measurement, collection, and analysis of learning-related data in order to understand and improve learning and the educational process in a given domain. Examples of learning analytics include monitoring students’ activity on online learning platforms, assigning personalised tasks based on test results, tracking progress and giving personalised feedback, and using the collected data to improve the course curriculum.
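
As a small illustration, the following Python sketch (using pandas; the log data and column names are hypothetical stand-ins for a real platform’s records) aggregates activity and test results per student, a typical basis for personalised feedback:

    # Hypothetical activity log from an online learning platform.
    import pandas as pd

    log = pd.DataFrame({
        "student": ["ana", "ana", "ben", "ben", "ben"],
        "quiz_score": [55, 72, 90, 85, 95],
        "minutes_on_platform": [30, 45, 20, 25, 40],
    })

    # Aggregate performance and activity per student.
    summary = log.groupby("student").agg(
        avg_score=("quiz_score", "mean"),
        total_minutes=("minutes_on_platform", "sum"),
    )
    print(summary)

    # Flag students who may need extra support or personalised tasks.
    print(summary[summary["avg_score"] < 60])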