nCa Report
AI (Artificial Intelligence), despite being quite familiar to regular users of the Internet, remains a rather nebulous concept when it comes to its definitive use in cybersecurity.
The problem here is the real potential for pitting AI against AI. If AI is integrated into cybersecurity solutions to prevent phishing, what is there to stop hackers from bringing AI into their own arsenals to defeat the system?
Kazakhstan is among the countries that are giving serious thought to the scenario of good-AI vs. bad-AI.
The State Technical Service of Kazakhstan, billed on its website as ‘THE TECHNOLOGICAL CORE OF KAZAKHSTAN’S CYBERSHIELD,’ is working on integrating AI into the country’s cybersecurity system.
JSC State Technical Service
https://sts.kz/en/
The STS (State Technical Service) has studied the recent case of a phishing spree and issued an advisory to internet users:
How Data Leaks and Phishing Attacks Threaten SME Information Security
Kazinform had a conversation with Ulukbek Shambulov, the deputy chairman of STS, on the possible use of AI models to protect against phishing attacks.
Here is the transcript:
– What opportunities does artificial intelligence open up in cybersecurity?
– First, AI allows you to automate threat detection and response in real time, which significantly speeds up the protection process. Second, AI can analyze large amounts of data to identify patterns and anomalies that may indicate potential threats. This helps in proactive defense, not just reacting to incidents that have already occurred. AI is also capable of learning from new data, which allows cybersecurity systems to adapt to new types of threats.
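To make the pattern-and-anomaly detection Shambulov describes more concrete, here is a minimal, hypothetical sketch in Python. It is not an STS system; the flow features, sample values and library choice (scikit-learn’s IsolationForest) are illustrative assumptions about how such a detector might be wired up.

```python
# Hypothetical sketch: unsupervised anomaly detection on network-flow features.
# The features and numbers are illustrative assumptions, not real STS data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one flow: [bytes_sent, bytes_received, duration_s, dest_port]
normal_traffic = np.array([
    [1200, 3500, 0.8, 443],
    [ 900, 2800, 0.5, 443],
    [1500, 4200, 1.1,  80],
    [1100, 3100, 0.7, 443],
    [1300, 3600, 0.9, 443],
    [1000, 2900, 0.6,  80],
])

# Train only on traffic believed to be benign: the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_traffic)

# Score new flows; a prediction of -1 marks an outlier worth escalating to an analyst.
new_flows = np.array([
    [1100, 3200, 0.7, 443],       # resembles normal traffic
    [95000, 200, 45.0, 6667],     # large upload to an unusual port
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    print(flow, "ANOMALY" if label == -1 else "normal")
```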
– Can AI improve work in this area?
– Yes, AI can significantly improve cybersecurity work. Using AI can improve the accuracy of threat detection, reduce the number of false positives, and automate routine tasks. This frees up time for cybersecurity professionals, allowing them to focus on more complex and critical tasks. In addition, AI can offer new approaches to threat analysis and protection, making systems more flexible and adaptive.
– Tell us more about the mechanism of AI in the field of cybersecurity.
– AI in cybersecurity works based on machine learning and data analysis algorithms. AI systems are first trained on historical data, including information about previous attacks and threats. Based on this learning, the AI creates models that are able to identify suspicious behavior or anomalies in network traffic. AI then uses these models to monitor current data and activity, identifying potential threats and automatically responding to them, such as blocking suspicious traffic or sending alerts to specialists.
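A rough illustration of the train-then-monitor loop he outlines, offered as a hedged sketch rather than STS’s actual pipeline: a classifier is fitted on historical flows labelled benign or malicious, and new traffic is then scored so that flows crossing an assumed threat-probability threshold can be blocked or escalated. All feature names, labels and the threshold below are made-up assumptions.

```python
# Hypothetical sketch of the train-then-monitor loop described above.
# Features, labels and the alert threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Historical flows: [failed_logins, bytes_out_kb, distinct_ports, night_time]
X_history = np.array([
    [ 0,  12,  1, 0],
    [ 1,  40,  2, 0],
    [ 0,   8,  1, 1],
    [25, 900, 40, 1],   # past brute-force and exfiltration incident
    [30, 750, 35, 1],   # past incident
    [ 0,  15,  1, 0],
])
y_history = np.array([0, 0, 0, 1, 1, 0])  # 0 = benign, 1 = attack

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

ALERT_THRESHOLD = 0.8  # assumed policy value

def handle_flow(flow_features):
    """Score one live flow and decide whether to block and alert (illustrative only)."""
    p_attack = model.predict_proba([flow_features])[0][1]
    if p_attack >= ALERT_THRESHOLD:
        return f"block + alert analyst (p_attack={p_attack:.2f})"
    return f"allow (p_attack={p_attack:.2f})"

print(handle_flow([28, 820, 38, 1]))  # resembles past incidents
print(handle_flow([ 0,  10,  1, 0]))  # resembles normal traffic
```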
– Are there any risks when using AI in your work?
– Yes, there are risks when using AI in cybersecurity. One of the main risks is that AI can become a “black box”, where decisions are made based on algorithms that are difficult to explain or understand. This makes systems difficult to debug and improve, and reduces confidence in their decisions. For example, when using models such as GPT, there may be cases of so-called “hallucinations”, where the AI generates incorrect or inaccurate data. In this regard, it is recommended to double-check the AI’s output and not rely on it 100%, as errors are possible.
In addition, there is a risk that attackers may try to fool AI systems using new attack methods. Therefore, it is important to regularly update AI models and ensure they are able to adapt to new threats.
– Are there systems to protect against unforeseen circumstances when working with AI?
– Yes, there are various systems and practices to protect against unforeseen circumstances when working with AI. These include monitoring and auditing mechanisms that track the performance of AI systems and identify potential failures. Testing and verification methods are also used to ensure that AI systems operate correctly and safely. An important aspect is regularly updating and retraining models so that they can respond to new threats and changing conditions.
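As one possible form of the monitoring and retraining he mentions, the sketch below re-evaluates a fitted classifier (such as the one sketched earlier) on freshly labelled traffic and flags it for retraining when accuracy falls below an assumed policy threshold. The threshold and workflow are illustrative, not an STS procedure.

```python
# Hypothetical audit hook: re-check a fitted model on recent labelled traffic
# and flag it for retraining if accuracy degrades. The threshold is an assumption.
from sklearn.metrics import accuracy_score

MIN_ACCEPTABLE_ACCURACY = 0.95  # assumed audit policy

def audit_model(model, X_recent, y_recent):
    """Return True if the model still meets the accuracy bar on recent labelled data."""
    accuracy = accuracy_score(y_recent, model.predict(X_recent))
    if accuracy < MIN_ACCEPTABLE_ACCURACY:
        print(f"Accuracy dropped to {accuracy:.2f}: schedule retraining and manual review.")
        return False
    print(f"Accuracy {accuracy:.2f}: model within policy.")
    return True
```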
Let us recall that this year Kazakhstan developed a concept for the development of artificial intelligence for 2024-2029. In addition, the state is drafting a bill to regulate artificial intelligence in the country.
AI is already being used in government work. For example, Vice Minister of Energy of the Republic of Kazakhstan Ilyas Bakytzhan said at a briefing at the Central Communications Service that elements of artificial intelligence are already being used in certain areas of the ministry’s work.
It was previously reported that the world’s first law defining the rules for the use of systems based on artificial intelligence has come into force in the EU. Its goal is to reduce the risks associated with AI. The European Commission is creating a body to oversee compliance with the law, and violators face substantial fines.
* * *
The countries in Central Asia, including Kazakhstan, are not just trying to keep pace with the rapid developments in AI and machine learning but are also finding practical uses for them.
An important scenario to guard against will always be the looming prospect of good-AI vs. bad-AI. /// nCa, 20 August 2024