I’m pursuing a master’s in machine learning. The problem at hand is machine unlearning: we want to remove facts that a model has learned, without necessarily degrading its accuracy.

We simply want to remove the effect of the data from the model and put it in a state as if the data had never been seen by the model.
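To make that goal concrete, here is a minimal sketch of *exact* unlearning on a toy "model" (the per-feature mean of the training data, standing in for a real learner). This is my own illustrative example, not a method from the literature: because the mean is a simple sufficient statistic, we can either retrain from scratch without the deleted sample or downdate the statistic directly, and both routes land on the identical state, as if the sample had never been seen.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # toy training set: 100 samples, 3 features

# "Model": the per-feature mean of the training data.
mean_full = X.mean(axis=0)

# Sample we want the model to forget.
i = 42

# Route 1 (the gold standard): retrain from scratch without sample i.
mean_retrained = np.delete(X, i, axis=0).mean(axis=0)

# Route 2 (efficient unlearning): downdate the sufficient statistic.
n = len(X)
mean_unlearned = (n * mean_full - X[i]) / (n - 1)

# Both routes yield the same state: as if X[i] had never been seen.
print(np.allclose(mean_retrained, mean_unlearned))  # True
```

For models without such clean sufficient statistics (deep networks, for instance), this equivalence is exactly what is hard to achieve, which is why approximate unlearning is an open research problem.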

A few days ago, I was talking with some students. We were discussing whether we have the right to be forgotten in real life.

Because if someone sees you, they remember you, and you have no right to tell them that they must unlearn or forget you. That person may even be a painter who uses your information to create art, not necessarily your face directly, but your style or the patterns of your face. I don’t know the exact law that covers this kind of human interaction.

It is hard to draw a line on ownership of data, because the person who sends the data and the person who receives it are both, in some sense, owners. For example, if photons reflected off someone hit my eyes and retina and my brain processes them, we can say it is my information now, like any other information I gather from nature; without it, there is no me.

So how do we draw the line? What is the difference between a machine and a human, and what if, in a few years, by combining machines and humans, we can no longer distinguish between them?

Currently, the challenge of machine unlearning is understandable, but in general we don’t have that ability as humans, and maybe machines shouldn’t either. (I’m not sure about this; I’m just thinking out loud, questioning the future of humans and machines, and most importantly, my thesis!!!!!)

– Ali