While artificial intelligence (AI) empowers cyber attackers, it can also be effectively utilised to empower users in navigating the challenges of disinformation. Việt Nam News spoke with Vitaly Kamluk, APAC director of the Global Research & Analysis Team at Kaspersky, a global cybersecurity company, about the proactive measures that businesses and individuals can take to ensure a more secure digital future.
Inner Sanctum: What are the ways in which AI empowers cyber attackers?
AI has increasingly become a valuable tool for cyber attackers, allowing them to carry out more sophisticated and damaging attacks.
From aiding the development of malware to automating large-scale attacks, AI has the potential to significantly enhance the effectiveness and reach of cybercriminals.
One major way AI empowers cyber attackers is through its use in advanced persistent threat (APT) cyber attacks.
AI can analyse vast amounts of data and identify vulnerabilities, enabling attackers to exploit weaknesses in targeted systems.
AI can also support the development and deployment of malware by automating tasks such as purchasing network infrastructure and compromising accounts, making the resulting threats harder for antivirus software and spam filters to detect.
AI can also be used to manipulate or fabricate data, sowing confusion or enabling the impersonation of officials. Deepfake scams, for instance, use AI to create realistic fake videos or audio recordings, which can then serve identity theft or blackmail.
Inner Sanctum: Can you talk about the risks and flaws of AI which have been discovered in relation to cybersecurity?
One significant concern is the emergence of AI-powered password cracking.
Cyber attackers can use machine learning algorithms to analyse large data sets of leaked passwords, generating likely variations and sharply improving their ability to guess users' passwords. This poses a significant threat, as weak or predictable passwords can be compromised very quickly.
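To make the idea concrete, the Python sketch below shows the kind of rule-based candidate generation that machine-learning-assisted cracking automates at a far larger scale, learning the rules from leaked data rather than hard-coding them. The base words, substitution rules and suffixes here are illustrative assumptions, not any real attack tool.

    # A minimal sketch of human-style password variation; the base words,
    # substitutions and suffixes are assumptions for illustration.
    def candidate_variations(base_words):
        """Yield common variations people apply to a base password."""
        leetspeak = str.maketrans({"a": "@", "o": "0", "e": "3", "i": "1"})
        suffixes = ["", "1", "123", "!", "2024"]
        for word in base_words:
            stems = {word, word.capitalize(), word.translate(leetspeak)}
            for stem in stems:
                for suffix in suffixes:
                    yield stem + suffix

    # Hypothetical base words standing in for entries from a breached list.
    for guess in candidate_variations(["password", "dragon"]):
        print(guess)

Even this toy generator turns two base words into dozens of plausible guesses, which is why long, unique, randomly generated passwords and multi-factor authentication matter.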
AI-assisted hacking has also become a growing concern. With AI algorithms, cybercriminals can automate and enhance various hacking activities, including vulnerability scanning, the detection of system weaknesses and the development of adaptive malware.
By incorporating AI in these activities, hackers can execute attacks more precisely and swiftly, making it even more challenging for defenders to mitigate threats.
In addition, there is a risk of AI being used in supply chain attacks. Cybercriminals can insert malicious code or components into legitimate software or hardware products, compromising an organisation’s cybersecurity defences.
These attacks can go unnoticed for extended periods, enabling malicious actors to access sensitive information or disrupt critical systems.
Perhaps the most concerning prospect is the creation of an intelligent malicious system that is fully independent and autonomous. Such a system could imitate known threat actors, automatically launch attacks, and keep defenders occupied with false noise.
Inner Sanctum: How can AI be effectively utilised to empower users in navigating the challenges of disinformation?
First and foremost, accessibility is crucial. We must limit anonymous access to intelligent systems built and trained on rich data volumes.
By storing the resulting content history and identifying how synthesised content is created, we can establish transparency and accountability in the AI ecosystem.
Policy also plays a pivotal role in leveraging AI for the benefit of users. The European Union’s discussions on tagging AI-assisted content are a step in the right direction.
By providing users with a quick and reliable way to detect AI-generated images, sounds, videos, or text, we can enhance their ability to discern the authenticity of information.
Violators must face punishment, ensuring a safer digital environment for users.
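As a rough illustration of how such tagging could work technically, the Python sketch below binds generated content to a provider-held key so that a tag can later be verified. The key handling and tag format are assumptions for illustration only; real proposals, such as cryptographic watermarking or content-provenance standards, are considerably more elaborate.

    import hashlib
    import hmac

    # Assumption: the AI provider holds a secret signing key.
    SIGNING_KEY = b"provider-held secret key"

    def tag_content(content: bytes) -> str:
        """Return a provenance tag binding content to its generator."""
        return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

    def verify_tag(content: bytes, tag: str) -> bool:
        """Check that the tag matches, i.e. the content is untampered."""
        return hmac.compare_digest(tag_content(content), tag)

    generated = b"text produced by a generative model"
    tag = tag_content(generated)
    print(verify_tag(generated, tag))         # True: authentic tag
    print(verify_tag(generated + b"!", tag))  # False: altered content

A scheme along these lines would give users the quick, reliable detection of AI-generated material described above, provided providers are required to tag their output.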
Education is key to empowering individuals to navigate the challenges of disinformation.
By creating awareness and teaching people how to detect artificial content, validate information, and report possible abuse, we can equip them with the necessary skills to make informed decisions.
Schools should include lessons on AI, explaining the concept, how it differs from natural intelligence, and the reliability and potential shortcomings of AI systems.
Inner Sanctum: What proactive measures can businesses and individuals implement to adapt to the progress of AI while ensuring a more secure future for all?
First, businesses should invest in staying future-focused with their technology. Outdated technology and redundant manual labour can leave businesses vulnerable to exploitation.
Staying ahead of the curve and using forward-thinking solutions, such as Kaspersky Integrated Endpoint Security, can help businesses adapt and protect themselves from potential vulnerabilities.
Instead of replacing human workers with AI, businesses should augment their teams with AI and machine learning capabilities.
It is crucial for businesses to ensure that their IT teams are trained to work with and support this infrastructure, utilising AI as a tool rather than solely relying on it.
Businesses and individuals should also routinely update their data policies to comply with evolving legislation.
Data privacy has become a focal point for governing bodies across the globe, and it will continue to be a significant concern for most enterprises and organisations in the future. — VNS