Jan 29, 2024 | by A. Conflitti, Technology

The proof of the pudding is in the eating, and the quality of an ML Model is in its performance over time.

You may have created the best predictive model ever, but you still need to maintain and monitor it over time, or you risk that its performance degrades and is no longer up to scratch. All kinds of problems can creep in, such as drift. One of the tools that can help you keep your model in shape is the feedback API.

Consider weather forecasts. You have a model that takes readings from meteorological sensors as input and forecasts the weather for the following days or weeks. If the model is good, it will predict correctly whether the day will be sunny or cloudy.

However, after a while, weather conditions in the region shift, maybe because of climate change and global warming: winters are shorter and average temperatures soar. Now your model’s predictions are not very trustworthy anymore.

Notice this: as a person, you can look at the sky and check whether the forecast was correct, but how do you know when the error goes beyond an acceptable level? This is where the feedback API is your friend.

Using the feedback API, you feed in the actual weather conditions as they occur, compare them with the model's predictions, and compute all kinds of related metrics. When an error metric crosses a given threshold, an alert is triggered, and you know you have to take corrective action to bring the model back into shipshape condition.
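As a rough illustration, the loop can be sketched in a few lines of Python. This is not the platform's actual API: the function names and the 20% threshold are hypothetical, chosen only to show the idea of comparing predictions with observed outcomes and alerting past a threshold.

```python
def error_rate(predictions, actuals):
    """Fraction of forecasts that did not match the observed weather."""
    wrong = sum(p != a for p, a in zip(predictions, actuals))
    return wrong / len(predictions)

def needs_attention(predictions, actuals, threshold=0.2):
    """Return True (trigger an alert) when the error rate exceeds the threshold."""
    return error_rate(predictions, actuals) > threshold

# Predictions made earlier, compared against conditions reported via feedback.
predicted = ["sunny", "sunny", "cloudy", "sunny", "cloudy"]
observed = ["sunny", "cloudy", "cloudy", "cloudy", "cloudy"]

print(error_rate(predicted, observed))      # 0.4
print(needs_attention(predicted, observed))  # True: time for corrective action
```

In practice, the metric would be computed over a rolling window of recent feedback, so that old, stale errors do not mask a fresh degradation.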

One of the things you can do at this point is retrain the model, and here again the feedback API is your friend. By combining the readings from the meteorological sensors with the data coming through the feedback API, you always have a fresh stream of ground truth data that can be used for retraining at any time.
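The idea of pairing each prediction's input with the ground truth that arrives later can be sketched as below. The `FeedbackStore` class and its methods are invented for illustration, not part of any real API; the point is simply that feedback accumulates into a ready-to-use training set.

```python
class FeedbackStore:
    """Pairs each prediction's input features with the ground truth
    reported later via the feedback API."""

    def __init__(self):
        self._rows = []

    def add(self, features, ground_truth):
        """Record one (sensor readings, observed outcome) pair."""
        self._rows.append((features, ground_truth))

    def training_set(self):
        """Return fresh (X, y) lists, ready for retraining at any time."""
        X = [features for features, _ in self._rows]
        y = [truth for _, truth in self._rows]
        return X, y

store = FeedbackStore()
store.add({"temp": 21.5, "humidity": 0.60}, "sunny")
store.add({"temp": 14.0, "humidity": 0.85}, "cloudy")

X, y = store.training_set()
# X and y can now be passed to your training pipeline, e.g. model.fit(X, y)
```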

Of course, a weather prediction model is just one example of the problem and its solution. The same applies to any kind of predictive model. Take, for instance, a Computer Vision model used for Object Detection: this too is a form of prediction and is therefore subject to the same risk of degrading results. Luckily, it can also benefit from the feedback API.

The solution will be more complex and may require, for example, data annotation tools to check and annotate the correctness of the predictions. But the basic concept of comparing predictions with the ground truth arriving (at a later stage) through the feedback API, and computing metrics to check model performance, remains the same.
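For object detection, "comparing prediction with ground truth" commonly means measuring the overlap between a predicted bounding box and the annotated one, using Intersection-over-Union (IoU). Here is a minimal sketch under the assumption that boxes are given as `(x1, y1, x2, y2)` corner coordinates; the example boxes are made up.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

predicted_box = (10, 10, 50, 50)
annotated_box = (12, 12, 48, 52)  # ground truth from the annotation tool

print(iou(predicted_box, annotated_box))  # ~0.818: a close match
```

Averaging such scores over annotated feedback gives an error metric that can drive the same threshold-and-alert logic as in the weather example.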

Today you can do all this (and more!) directly inside the Radicalbit MLOps Platform. And then you are free to go and taste your pudding!
