Artificial Intelligence has emerged as a double-edged sword in the ever-evolving world of technology. While AI has already driven remarkable advances across multiple industries, experts are increasingly concerned about criminals using the technology to wreak havoc in the cyberworld.
The absence of strong security measures and safety protocols has paved the way for extensive cyber scams, dissemination of misinformation and an alarming surge in malware. AI can be used in ways that the common man can’t even imagine, but the real question is, how far can criminals go with it?
Fake AI-Generated Images
One of the most disturbing developments is the generation of fake AI images. Despite being completely fabricated, these images can provoke strong emotional reactions from social media audiences. A prime example is the ‘smiling Indian wrestlers’ photo, which gained immense traction on social media during the 2023 Indian wrestlers’ protest. The photograph showed the wrestlers smiling broadly while being detained in a bus. On closer inspection, experts highlighted several glitches: the wrestlers were shown with unnaturally perfect teeth and with dimples that neither wrestler actually has. However, the fleeting nature of social media consumption means users overlook such details while scrolling through their timelines.
Use of AI for Cyber Attacks
Deepfakes have become a significant tool for perpetrating scams across the globe. According to a recent study, India tops the list of countries affected by AI-powered voice fraud, with about 83% of Indian victims reporting that they lost money to such scams. Over the last two years, there have been innumerable incidents of deepfake scams in which callers impersonate either someone close to the victim or an authority figure. Recently, a 73-year-old man from Kerala, India lost about $500 to a deepfake scam after receiving a WhatsApp video call from someone impersonating his former colleague. The caller asked him to transfer money for his sister’s surgery, and the victim complied, losing $500 before he even realised he had been tricked!
The cybersecurity industry has discovered multiple instances of AI being leveraged to write malicious code for malware. What’s more troubling is that a considerable number of the individuals posting this code lacked any programming background, underscoring how easy it is to exploit AI for malicious intent. Further research uncovered online forums being used to share strategies for evading detection by AI safety teams. Malicious actors are even selling ChatGPT prompts and code for financial gain.
In conclusion, the immense potential of AI is matched by the dreadful threat it poses in the wrong hands. As cyber criminals find more ways to exploit the capabilities of artificial intelligence for illicit gain, it is imperative that both technological and legal measures evolve to mitigate these risks.
We at GDA have a team of professionals who specialise in cybersecurity. To know more about our services, head over to our website: www.globedetective.com.