
8 Security Issues Raised by the Appearance of ChatGPT


(Photo = shutterstock)

Security issues attributable to ChatGPT, such as writing phishing emails and generating malicious code, are gradually emerging.

Accordingly, Enterprise Bit, a specialist media outlet, surveyed leading security experts and on the 23rd (local time) presented the security threats that have grown with the appearance of ChatGPT.

The eight representative problems are as follows. Because each expert was asked to name the problem they considered most important, some points overlap.

■ Lowered cybercrime barriers to entry
Traditionally, cybercrime has required specialized skills and resources. With ChatGPT, however, experts argue that the barrier to entry has dropped sharply, since malicious code can be generated from text input alone.

■ Writing convincing phishing emails
Phishing email is the most frequently cited case and one of the biggest problems. It also plays to ChatGPT's main strength: producing polished text that is indistinguishable from human writing.

■ Need for security experts with AI knowledge
Experts noted that the cybersecurity problems caused by ChatGPT are still in their infancy. They predicted that if a new class of problem emerges in the future, security officers with specialized AI knowledge will be required.

■ Monitoring the output of generative AI
ChatGPT can expose corporate information to the outside without the user being aware of it. For this reason, a growing number of companies are banning the use of ChatGPT. Thorough monitoring of the content it generates is essential.
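One way such monitoring is often approached is to scan prompts for sensitive material before they leave the company. The sketch below is a minimal, hypothetical illustration of that idea; the pattern names and the `internal.example.com` domain are invented for the example, and a real deployment would rely on a proper data-loss-prevention tool rather than a few regular expressions.

```python
import re

# Hypothetical patterns for data that should not leave the company
# inside a ChatGPT prompt (illustrative only, not exhaustive).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_host": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of the sensitive patterns found in an outgoing prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def is_safe_to_send(text: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(text)
```

For example, `scan_prompt("Summarize the outage on db01.internal.example.com")` would flag the internal hostname, and the prompt would be blocked or logged for review instead of being sent.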

■ Increased attack volume
With a single text prompt such as "Use web data to identify the people around a specific person and keep sending messages under that name," phishing emails can easily be sent in bulk. Experts pointed out that the problem is this familiar but growing type of attack rather than a new kind of threat.

■ Growing weight of security
Analysts expect companies to prioritize security when introducing generative AI such as ChatGPT. They also anticipate an additional burden, such as setting a usage policy and establishing a process to enforce it effectively.

■ Increased human burden
Ordinarily, cybercriminals and defenders advance their skills by learning from each other's best practices. But this time the attacker is an AI. Naturally, the burden on human defenders will increase.

■ But nothing has changed
ChatGPT is clearly a powerful technology, but the attack methods and routes follow existing ones. In other words, ChatGPT does not remove the need for an email account to send phishing emails, nor the need to set up phishing sites. These elements are traditional and remain sufficiently detectable, experts point out.

Reporter Kang Doo-won ainews@aitimes.com
