Robustness
Robustness is a measure of a model's ability to produce accurate results when encountering novel data. Most ML and AI models are initially trained on a set of data that is a representative sample of the data they will encounter in production. However, it is nearly impossible for training data to capture the full complexity of a real-world data distribution. Robust models can generalize from specific training data to broader datasets, and handle data drift and adversarial inputs gracefully.
A common critique of modern machine learning is its tendency to produce fragile models that behave as expected only when they encounter data very similar to the data they were trained on. Such models are said to be overfit: while they look good in testing, in production they often fail in unexpected ways.
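The fragility of an overfit model can be illustrated with a deliberately simple, hypothetical sketch: a 1-nearest-neighbour "memorizer" achieves perfect accuracy on noisy training data, while a model that captures the true underlying rule generalizes better to clean held-out data. The data-generating rule (label 1 when x > 0.5) and the 20% noise rate are assumptions for the example, not from the original text.

```python
import random

random.seed(0)

# Toy data: true label = 1 when x > 0.5, with optional label noise.
def make_data(n, noise=0.0):
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < noise:
            y = 1 - y  # flip the label to simulate noisy training data
        data.append((x, y))
    return data

train_set = make_data(200, noise=0.2)
test_set = make_data(200, noise=0.0)  # clean held-out data

# Model A: 1-nearest-neighbour memorizer — fits the training noise exactly.
def knn_predict(x):
    return min(train_set, key=lambda p: abs(p[0] - x))[1]

# Model B: the simple underlying rule — ignores the noise.
def threshold_predict(x):
    return int(x > 0.5)

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(accuracy(knn_predict, train_set))       # 1.0 — perfect fit, noise included
print(accuracy(knn_predict, test_set))        # lower on clean held-out data
print(accuracy(threshold_predict, test_set))  # 1.0 — the true rule generalizes
```

The memorizer "looks good in testing" only if evaluated on its own training data; the gap between its training and held-out accuracy is a simple signal of overfitting.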
There are a number of approaches that are being developed for improving robustness:
- Better interpretability and understanding of black-box AI techniques can, in general, help modelers improve robustness. Agent-based models are traditionally more robust than other methods because the agents have set, interpretable behaviors, which allows them to be quickly improved and corrected.
- Domain randomization within [multi-agent systems] and synthetic environments can help create models that perform better under a wider array of conditions, including conditions that occur in the real world but were not present in the historical data the models were trained on.
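Domain randomization can be sketched in a few lines: each training episode samples environment parameters from deliberately wide ranges, so the model is exposed to conditions beyond any single historical dataset. The specific parameters below (friction, sensor noise, lighting) and their ranges are hypothetical placeholders, not from the original text.

```python
import random

random.seed(0)

# Hypothetical environment parameters, each sampled from a wide range
# so that no single training episode resembles only the historical data.
def sample_environment():
    return {
        "friction": random.uniform(0.2, 1.0),       # assumed range
        "sensor_noise": random.uniform(0.0, 0.1),   # assumed range
        "lighting": random.choice(["day", "dusk", "night"]),
    }

def run_episode(env):
    # Placeholder: a real system would train or evaluate an agent here,
    # inside the randomized environment.
    return env

# Each episode sees a freshly randomized environment.
episodes = [run_episode(sample_environment()) for _ in range(5)]
for env in episodes:
    print(env["lighting"], round(env["friction"], 2))
```

A model trained across many such randomized episodes is less likely to latch onto conditions specific to one dataset, which is the point of the technique.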