In a bid to make its algorithms safer and more responsible with user data, Google has announced a competition called the Machine Unlearning Challenge. Through it, the AI giant aims to find new ways to remove sensitive data from neural networks so that models can comply with global data regulations.
Removing the influence of training data from a trained neural network has long been a difficult problem in AI. The emerging field of machine unlearning aims to solve it, either by retraining an algorithm on a dataset that excludes the affected data points or by making adjustments to an already-trained model, as sketched below.
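As a minimal sketch of the first approach, "exact" unlearning simply refits the model from scratch without the data to be forgotten. Everything here, the synthetic data, the scikit-learn model, and the forget_idx variable, is a hypothetical illustration, not anything prescribed by the challenge:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a training set (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# Original model, trained on everything.
model = LogisticRegression().fit(X, y)

# Suppose a deletion request covers these examples (the "forget set").
forget_idx = np.arange(50)
retain_mask = np.ones(len(X), dtype=bool)
retain_mask[forget_idx] = False

# Retraining on the retain set alone yields parameters that carry no
# trace of the forgotten points -- the gold standard that cheaper,
# approximate unlearning methods try to match.
unlearned_model = LogisticRegression().fit(X[retain_mask], y[retain_mask])
```

Retraining from scratch is prohibitively expensive for large models, which is why the second family of methods, adjusting an already-trained model, is the focus of most research.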
The competition will be hosted on Kaggle as part of the NeurIPS 2023 Competition Track. According to Google, participants will be given a realistic scenario in which a certain subset of the training data must be forgotten by a pre-trained model. The results will be evaluated using membership inference attacks, which try to infer whether particular examples were part of a model's training data. If the attacks cannot detect the forgotten examples, the model will be considered to have passed the test.
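To make the evaluation concrete, here is a hedged sketch of one classic membership inference attack, the loss-threshold attack: models tend to assign lower loss to examples they were trained on than to unseen ones, so per-example loss can serve as a membership score. The dataset, model, and per_example_loss helper are illustrative assumptions; the challenge's actual attack protocol may be more sophisticated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data split into members (used for training) and non-members.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_train, y_train = X[:1000], y[:1000]   # members
X_out, y_out = X[1000:], y[1000:]       # non-members

model = LogisticRegression().fit(X_train, y_train)

def per_example_loss(model, X, y):
    # Negative log-likelihood of the true label under the model.
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

losses = np.concatenate([per_example_loss(model, X_train, y_train),
                         per_example_loss(model, X_out, y_out)])
is_member = np.concatenate([np.ones(1000), np.zeros(1000)])

# Lower loss => more likely to be a member. An AUC near 0.5 means the
# attacker cannot distinguish members from non-members -- the outcome a
# successfully unlearned model should achieve on the forgotten data.
auc = roc_auc_score(is_member, -losses)
print(f"membership inference AUC: {auc:.3f}")
```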
Participants will be scored on both forgetting quality and the model's utility after forgetting. The competition will run between mid-July and mid-September, and Google has announced that it will release a starter kit for participants to test their unlearning methods.
The crux of the problem is that deleting training data from a database does not delete that data's influence on the models trained on it: even after the records are removed, a model's parameters may still encode information about them. Machine unlearning techniques aim to remove this residual effect as well, making models compliant with data deletion requests.