ChatGPT-Made Malware Could Be A Problem For Antimalware Solutions
AI can do a lot of things, and that includes bad things. While OpenAI, maker of ChatGPT, does its best to keep the chatbot in check with virtual barriers, circumventing them isn’t exactly the hardest thing to do. The AI turns out to be capable of – and especially good at – generating a type of malware called polymorphic malware, which can be extremely difficult for antivirus systems to catch.
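To see why that is, here’s a minimal sketch of the core polymorphic trick – entirely our own illustration, not CyberArk’s code, with a deliberately harmless print() standing in for the payload. The same logic gets re-encoded with a fresh random key on every generation, so no two copies share the same bytes:

```python
import os

# Harmless stand-in for whatever logic the malware actually carries.
PAYLOAD = b'print("hello from payload")'

def mutate(payload: bytes) -> bytes:
    """Re-encode the payload with a fresh random XOR key and wrap it in a
    tiny self-decoding stub, so every generation has different bytes."""
    key = os.urandom(1)[0] or 1          # random 1-255; 0 would be a no-op
    encoded = bytes(b ^ key for b in payload)
    stub = (
        f"_k = {key}\n"
        f"_e = {encoded!r}\n"
        "exec(bytes(b ^ _k for b in _e).decode())\n"
    )
    return stub.encode()

gen_a, gen_b = mutate(PAYLOAD), mutate(PAYLOAD)
print(gen_a != gen_b)   # almost certainly True: no fixed byte signature matches both
exec(gen_a.decode())    # ...yet both generations behave identically
exec(gen_b.decode())
```

Real polymorphic engines are far more elaborate, but the principle is the same: the behaviour stays constant while the representation keeps changing.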
EDR, or endpoint detection and response, is a technique that monitors “endpoints” – client devices like PCs, laptops, phones and IoT devices – and flags any suspicious activity performed by rogue software or malware. However, the “polymorphic” part of the malware is the problem: it continually rewrites itself into new forms, so no fixed signature stays valid for long, making detection difficult.
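As a concrete illustration of why that defeats the simplest detection model, here’s a toy sketch (our own, not any vendor’s actual engine) of naive hash-based signature matching, and how a variant with even one changed byte slips past it:

```python
import hashlib

# The crudest signature database: SHA-256 hashes of previously analysed samples.
original = b"\x90\x90payload-bytes-as-first-analysed"
KNOWN_BAD = {hashlib.sha256(original).hexdigest()}

def flagged(sample: bytes) -> bool:
    """Return True only if this exact byte sequence has been seen before."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

# A polymorphic variant keeps the behaviour but changes the representation,
# so the hash -- and any fixed byte signature -- no longer matches.
variant = original.replace(b"\x90\x90", b"\x91\x91")
print(flagged(original))  # True
print(flagged(variant))   # False: detection has to look at behaviour instead
```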
Cybersecurity engineers have already demonstrated proof-of-concepts of this family of malware, including a polymorphic keylogger – a type of malware that logs your key presses to grab passwords and other sensitive data, with the extra “morphing” part making it difficult for antimalware systems to catch. Another example uses the same technique to perform code injection, a common type of malware attack, using code generated by ChatGPT (CyberArk documented the entire process of tricking the chatbot into generating malicious code).
What’s next for cybersecurity then? Essentially, you’ll have to fight fire with fire – as antimalware solutions have increasingly relied on cloud-based detection in recent years, it’s only natural that the next iteration of malware detection will add artificial intelligence into the mix.
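As a rough sketch of what that could look like – a toy example of ours using scikit-learn with made-up telemetry, not any vendor’s actual pipeline – a classifier trained on behavioural features can flag a sample whose bytes have never been seen before:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-process features: [keystroke hooks set, files touched/min,
# outbound connections/min, memory pages made writable+executable]
X_train = [
    [0,  3, 1, 0],   # benign: text editor
    [0, 40, 0, 0],   # benign: backup job
    [1,  2, 6, 1],   # malicious: keylogger exfiltrating data
    [0,  1, 2, 3],   # malicious: injector flipping page permissions
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A freshly morphed variant has never-seen bytes, but its *behaviour*
# (hooking keys, phoning home) still resembles the malicious training rows:
print(model.predict([[1, 2, 5, 1]]))  # [1] -- flagged despite no known signature
```

The point isn’t this particular model, but the shift it represents: judging what a program does rather than what its bytes look like, which is exactly the axis polymorphism cannot mutate away.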
Source: Tom’s Hardware
Pokdepinion: That being said, always practice good cybersecurity habits. Antimalware solutions should NOT be your first line of defense – they should be the last, in case you get caught out.