ChatGPT and AI are already threats to business cybersecurity

Artificial Intelligence will drive a number of new attacks. At the same time, companies should use AI itself to improve cybersecurity.

On March 29 this year, more than a thousand researchers, experts and technology executives, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of new artificial intelligence (AI) tools, describing the current situation as a “dangerous” arms race. The concern raised by some of the brightest minds in technology exposes a dichotomy around one of the most talked-about topics of the moment: on the one hand, AI is celebrated for the clear benefits it will bring to the economy and society; on the other, it inevitably has the potential to undermine the most fundamental foundations of those same systems.

For the Security Design Lab (SDL), a global cybersecurity research and development network operating in South America and Europe, Artificial Intelligence and Cybersecurity are on converging paths, and coexistence between the two will be inevitable. Microsoft, a partner of OpenAI, the company that created ChatGPT, is one example: while announcing the injection of billions of dollars into the evolution of the tool, it will also double its investment in cybersecurity research and development, to around $20 billion.

What risks could Artificial Intelligence bring to the security of companies?

One of the biggest challenges is hackers using AI to develop more sophisticated cyber threats, such as realistic phishing emails, deploying malware or creating convincing deepfake videos. At a recent Dell event in Las Vegas, one of the biggest scammers in US history, Frank Abagnale Jr, whose life was portrayed in the movie “Catch Me If You Can”, was very clear: “What I did 40 years ago is much easier to do today because of the technologies that exist. Not only from the point of view of technology, but especially people. Social engineering is a reality and it is not going to change. Cybercriminals will use Artificial Intelligence. It will be evil,” said the expert, who became an FBI consultant and speaker on information security.

“ChatGPT, for example, could easily be used by a hacker to generate a customized spear phishing message based on a company’s marketing materials and common everyday emails. It can fool people who have been well trained in recognizing fraudulent messages, because it will not look like the messages they have been trained to detect,” warns Alexandre Vasconcelos, director for Latin America at Security Design Lab.

Christiano Sobral, managing partner and digital law specialist at Urbano Vitalino Advogados, says cybersecurity experts have recently described the dangerous potential of ChatGPT and its ability to create polymorphic malware that is almost impossible to identify using endpoint detection and response (EDR). “EDR is a type of cybersecurity technique that can be deployed to catch malicious software. However, experts suggest that this traditional protocol is no match for the potential damage that ChatGPT can create. Code that can mutate – this is where the term polymorphic comes in – is much harder to detect. Most large language models, such as ChatGPT, are designed with filters to prevent the generation of content their creators deem inappropriate. This can range from specific topics to, in this case, malicious code. However, it didn’t take long for users to find ways to circumvent these filters. It is this tactic that makes ChatGPT particularly vulnerable to individuals looking to create harmful scripts,” he explains.

How can Artificial Intelligence help the evolution of cybersecurity tools?

If identifying cyber threats has become, and will continue to become, increasingly difficult, security systems integrated with AI-based learning algorithms could bring significant advantages to cybersecurity. “For example, identifying the variants of a malware can be challenging, especially when huge volumes of code mask it. However, dealing with this cyber threat becomes easier when security solutions use Artificial Intelligence mechanisms that include databases of existing malware and the ability to detect patterns to discover new malicious code,” points out Cristiano Iop, founder of Sikur, a cybersecurity solutions manufacturer.
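As an illustration of the pattern-matching approach Iop describes, the sketch below flags a mutated sample by comparing its byte n-gram profile against a small database of known malicious code. This is a minimal, hypothetical example (the signature strings, threshold and function names are invented for illustration), not a description of how any particular product works:

```python
from typing import List, Set

def byte_ngrams(data: bytes, n: int = 4) -> Set[bytes]:
    """Extract the set of overlapping byte n-grams from a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard_similarity(a: Set[bytes], b: Set[bytes]) -> float:
    """Jaccard similarity between two n-gram sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_variant(sample: bytes, known_malware: List[bytes],
                 threshold: float = 0.6) -> bool:
    """Flag a sample whose n-gram profile closely matches any known family."""
    profile = byte_ngrams(sample)
    return any(jaccard_similarity(profile, byte_ngrams(m)) >= threshold
               for m in known_malware)

# A mutated variant still shares most n-grams with the original payload,
# so pattern matching catches it even though an exact signature would miss.
known = [b"push ebp; mov ebp, esp; call decode_loop; xor eax, eax"]
variant = b"push ebp; mov ebp, esp; call decode_loop; xor ebx, ebx"
benign = b"hello world, this is an ordinary document with plain text"
print(flag_variant(variant, known))  # True: close match to a known family
print(flag_variant(benign, known))   # False: no meaningful overlap
```

Real products use far more robust techniques (fuzzy hashing, behavioral features, learned models), but the core idea is the same: similarity to known patterns, not exact matches.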

With the use of Artificial Intelligence, the executive argues, cybersecurity systems will be able to recognize patterns, which means they can detect deviations from normal behavior and respond appropriately to threats in the IT (Information Technology), OT (Operational Technology) and ICS (Industrial Control System) ecosystems.
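The deviation-from-baseline idea can be sketched as a simple statistical check: learn what normal looks like, then flag readings that stray too far from it. A minimal illustration, with invented traffic figures (real systems would use learned models over many signals, not a single z-score):

```python
import statistics
from typing import List, Tuple

def detect_anomalies(baseline: List[float], readings: List[float],
                     z_threshold: float = 3.0) -> List[Tuple[int, float]]:
    """Flag readings that deviate from the learned baseline by more than
    z_threshold standard deviations -- a minimal stand-in for the
    pattern-deviation detection described above."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mean) > z_threshold * stdev]

# Baseline: normal requests-per-minute on an OT network segment (illustrative).
baseline = [98, 102, 100, 97, 103, 99, 101, 100, 98, 102]
readings = [101, 99, 100, 450, 98]  # a sudden spike at index 3
print(detect_anomalies(baseline, readings))  # [(3, 450)]
```

The same pattern generalizes: whether the signal is network traffic, PLC sensor values or login frequency, the system responds when behavior departs from its learned norm.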

The path of regulation in Brazil and worldwide

In Brazil, the Senate is analyzing a bill to regulate artificial intelligence systems. It is based on the work of a commission of jurists that spent 2022 analyzing other proposals related to the subject, as well as legislation already in force in other countries. The text creates rules for intelligence systems to be made available in Brazil, establishing the rights of people affected by their operation and defining criteria for use by public authorities. Violations of the rules could lead to fines of up to BRL 50 million per infraction or up to 2% of turnover in the case of companies. Other possible punishments are a ban on participating in experimental regulatory environments and temporary or permanent suspension of the system.

Abroad, the UK, third in the world in AI publications, has published a White Paper on its approach to regulation. According to Marcello Junqueira, partner at Urbano Vitalino Advogados and specialist in digital law, Prime Minister Rishi Sunak announced that the country will host a global summit on the regulation of Artificial Intelligence between September and December 2023.

“The material prepared in the UK proposes a set of overarching principles of regulation that addresses safety, security and robustness, transparency and explainability, fairness, accountability and governance, and contestability and redress. The content includes an annex on the country’s intentions to promote the development of international AI standards and points to existing forums for multilateral engagement on the topic, such as the OECD Artificial Intelligence Governance Working Group, G7 and Council of Europe Committee on Artificial Intelligence – the latter of which is currently developing a human rights treaty,” explains Junqueira.

https://defconpress.com/pressbrasil/chat-gpt-e-ia-ja-sao-ameacas-a-ciberseguranca-de-empresas/

By admin