The proof of the pudding is in the eating, and the quality of an ML Model is in its performance over time.

You may have created the best predictive model ever, but you still need to maintain and monitor it over time, or you risk that it degrades and its performance is no longer up to scratch. All kinds of problems can creep in, such as drift. One of the tools that can help you keep your model in shape is the feedback API.

Consider weather forecasts. You have a model that takes readings from meteorological sensors as input and forecasts the weather for the following days or weeks. If the model is good, it will predict correctly whether the day will be sunny or cloudy.

However, after a while, weather conditions in the region shift, perhaps because of climate change and global warming: winters get shorter and average temperatures rise. Now your model’s predictions are no longer trustworthy.

As a person, you can look at the sky and check whether the forecast was correct, but how do you know when the error goes beyond an acceptable level? This is where the feedback API is your friend.

Using the feedback API, you feed in the weather conditions actually observed each day, compare them with the model’s predictions, and compute all kinds of related metrics. When an error metric crosses a given threshold, an alert is triggered and you know you have to take corrective action to bring the model back into shipshape condition.
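To make the idea concrete, here is a minimal sketch of such a feedback loop in Python. All the names in it (`record_feedback`, `error_rate`, `ALERT_THRESHOLD`) are illustrative assumptions for this example, not the API of any particular platform:

```python
# Illustrative sketch of a feedback loop for a weather classifier.
# Names and threshold values are made up for this example.

ALERT_THRESHOLD = 0.30  # alert when more than 30% of forecasts are wrong

feedback_log = []  # (predicted, observed) pairs collected over time


def record_feedback(predicted: str, observed: str) -> None:
    """Store a prediction alongside the ground truth observed later."""
    feedback_log.append((predicted, observed))


def error_rate() -> float:
    """Fraction of forecasts that did not match the observed weather."""
    if not feedback_log:
        return 0.0
    wrong = sum(1 for pred, obs in feedback_log if pred != obs)
    return wrong / len(feedback_log)


def should_alert() -> bool:
    """True when the error rate has crossed the acceptable threshold."""
    return error_rate() > ALERT_THRESHOLD


# Usage: send in the model's forecasts and the weather actually observed.
record_feedback("sunny", "sunny")
record_feedback("sunny", "cloudy")  # the model got this one wrong
record_feedback("cloudy", "cloudy")
```

With one miss out of three forecasts, the error rate sits above the 30% threshold and the alert fires. In a real deployment the metric, the threshold, and the alerting channel would all be configured in the monitoring platform rather than hard-coded.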

One of the things you can do at this point is retrain the model, and here again the feedback API is your friend. By combining the readings from the meteorological sensors with the ground truth coming from the feedback API, you always have a fresh stream of labelled data that can be used for retraining at any time.
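The pairing step can be sketched as follows. This is a toy example under the assumption that sensor readings and feedback labels can be joined on a shared key such as the date; the variable and function names are invented for illustration:

```python
# Illustrative sketch: turning feedback into fresh training data by
# joining sensor readings with ground-truth labels on the date.

sensor_readings = {
    "2024-06-01": {"temp": 28.0, "humidity": 0.40},
    "2024-06-02": {"temp": 19.0, "humidity": 0.85},
}

# Ground truth arriving later through the feedback channel.
feedback_labels = {
    "2024-06-01": "sunny",
    "2024-06-02": "cloudy",
}


def build_training_set(readings, labels):
    """Pair each day's features with the weather actually observed."""
    dataset = []
    for day, features in readings.items():
        if day in labels:  # keep only days whose ground truth has arrived
            dataset.append((features, labels[day]))
    return dataset


training_set = build_training_set(sensor_readings, feedback_labels)
# training_set can now be handed to any retraining routine.
```

The point of the sketch is the join: readings whose ground truth has not yet arrived are simply held back until the feedback catches up.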

Of course, a weather prediction model is just one example of the problem and its solution. The same applies to any kind of predictive model. Take, for instance, a Computer Vision model used for Object Detection: this too is a form of prediction and therefore subject to the same risk of degrading results. Luckily for you, it can benefit from the feedback API just the same.

The solution will be more complex and may require, for example, data annotation tools to review the predictions and annotate the ground truth. But the basic concept remains the same: compare predictions with ground truth that arrives (at a later stage) through the feedback API, and compute metrics to check model performance.
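For object detection, "prediction matches ground truth" is usually judged with intersection over union (IoU) between the predicted and the annotated bounding boxes. Here is a hedged sketch of that comparison; the box format and the threshold are common conventions, but the function names are ours:

```python
# Illustrative sketch: scoring object-detection feedback with IoU.
# Boxes are (x1, y1, x2, y2); a prediction counts as correct when its
# IoU with the annotated ground-truth box meets a threshold.


def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def detection_accuracy(pairs, threshold=0.5):
    """Fraction of (predicted, annotated) box pairs with IoU >= threshold."""
    if not pairs:
        return 0.0
    hits = sum(1 for pred, truth in pairs if iou(pred, truth) >= threshold)
    return hits / len(pairs)
```

Real evaluation pipelines go further (per-class matching, precision/recall, mAP), but the core loop is the same as in the weather example: predictions on one side, later-arriving annotated ground truth on the other, and a metric in between.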

Today you can do all this (and more!) directly inside the Radicalbit MLOps Platform. And then you are free to go and taste your pudding!