The Sillycon Post

Exploring the dark side of Machine Learning and Artificial Intelligence

Published on 11/07/2020 · 5 min read
Image Credit: https://i2.wp.com/www.PartisanIssues.com

Nowadays, most budding computer engineers are going into the fields of AI and ML, because these fields form the basis of the future of mankind. Almost everything we see around us has already evolved, or will soon evolve, with the help of AI and ML. For example, manually driven cars are giving way to self-driving cars, and a great deal of progress has already been made on them. Even our choices and tastes are influenced by AI and ML: in online shopping, we are shown the kinds of products we have previously searched for. AI has already influenced our lifestyle, and I won’t be shocked if it “dominates” our lifestyle in the future.



I completed some Machine Learning courses a few weeks back, so I started looking online for projects. People usually start with very common ones like “gender classification” or “object recognition”. I wanted to do something truly unique, so I explored intensely. Finally, I found something that was quite different and also “dangerous” if not used wisely: predicting passwords just from the sounds produced by keys on the keyboard (spooky, right!?). The person who originally did the project trained a model on spectrogram images of the sounds the keys make. The training data consisted of actual passwords of people that were leaked a few years back by a hacker. When the model was tested, the accuracy was approximately 8%, which seems low, but put it another way: you can accurately guess the passwords of 8 out of every 100 people, which I would say is pretty scary.
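To make the idea concrete, here is a minimal, hypothetical sketch of the pipeline described above: turn each key-press sound into a spectrogram and classify it. Everything here is a toy assumption — the “keyboard” is synthetic (each key is modelled as a decaying tone at a made-up characteristic frequency), and a simple nearest-centroid classifier stands in for the real project’s trained model:

```python
import numpy as np

SR = 8000  # assumed sample rate in Hz
rng = np.random.default_rng(0)

def key_click(freq, n=1024):
    """Synthesize a toy 'key press': a decaying tone plus background noise."""
    t = np.arange(n) / SR
    return np.exp(-40 * t) * np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(n)

def spectrogram(x, win=128, hop=64):
    """Magnitude spectrogram via a simple windowed numpy STFT."""
    frames = [x[i:i + win] * np.hanning(win) for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# Toy "keyboard": each key gets a made-up characteristic resonance frequency.
keys = {"a": 400.0, "s": 650.0, "d": 900.0}

# Average several noisy spectrograms per key to get a training centroid.
train = {k: np.mean([spectrogram(key_click(f)).ravel() for _ in range(10)], axis=0)
         for k, f in keys.items()}

def predict(sound):
    """Guess which key produced the sound: nearest training centroid wins."""
    feat = spectrogram(sound).ravel()
    return min(train, key=lambda k: np.linalg.norm(train[k] - feat))

# Evaluate on fresh noisy samples of each key.
trials = 10
correct = sum(predict(key_click(keys[k])) == k for k in keys for _ in range(trials))
accuracy = correct / (len(keys) * trials)
```

On this clean synthetic data the classifier is nearly perfect, which is exactly why the real-world 8% figure is plausible: the hard part is noise, overlapping keystrokes, and unknown keyboards, not the classification step itself.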


What’s even scarier is that the majority of the websites we visit and the apps we install ask for microphone permission, and we grant it. Now think: what if they have trained a model just like this, or worse, an even more advanced version, since tech giants have far more resources to build a better model? How can we know for sure that our sensitive data is safe? Even though such applications and websites deny it, we can never be sure, at least I can’t, and believe me, I am saying this from personal experience. One day, my friends and I were discussing a laptop that my friend had just bought. A few minutes later, I opened a famous video streaming platform (not going to name it) and guess what!? My feed was full of videos about that particular laptop, the exact same model! It was unbelievable, because I use that application almost every day and I had never before seen a video about that particular laptop.


This was just one small example that I came across. Just imagine how many more such “devious lines of code” might exist out there without our knowledge. I agree that AI and ML have played a major role in revolutionizing healthcare, education and much more, but the point is: do we know what cost we are paying for it? Does all this development lead to something bigger? Are we just pawns in a game of chess played by people who want to dominate the future using technology?


Think about it and let me know your opinion in the comment section. Also, share if you have had a similar experience with Artificial Intelligence and Machine Learning.


Quote of the blog:

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Stephen Hawking