AI creates new malware that antivirus software cannot detect
The experiment involves Elon Musk’s OpenAI framework.
DEF CON Machine-learning tools are getting smarter, and can now generate malware of their own that slips past antivirus software.
In a presentation at the DEF CON hacking convention, Hyrum Anderson, technical manager of data science at security shop Endgame, demonstrated the company's research into adapting Elon Musk's OpenAI framework to the task of developing malware that security-protection engines cannot detect.
The system works mainly by tweaking malicious binary files so that they go unnoticed yet keep working once unpacked and executed. Anderson noted that AV engines, even those using artificial intelligence, can be fooled by slight changes to the usual combinations of bytes.
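To illustrate the idea of a small, functionality-preserving byte change, here is a minimal sketch (not code from the talk): appending junk bytes after the end of a binary alters its hash-based signature while leaving the executable portion untouched. The file contents below are stand-ins, not a real sample.

```python
# Sketch: a tiny, functionality-preserving mutation.
# Loaders typically ignore trailing "overlay" data appended to a PE file,
# so the program behaves the same while hash signatures stop matching.
import hashlib
import os
import random

def append_overlay(data: bytes, n: int = 64) -> bytes:
    """Return the binary with n random junk bytes appended at the end."""
    junk = bytes(random.randrange(256) for _ in range(n))
    return data + junk

original = b"MZ" + os.urandom(1024)   # stand-in for a real binary, not actual malware
mutated = append_overlay(original)

# The executable prefix is unchanged, but the file hash now differs,
# so a signature keyed on the whole-file hash no longer fires.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(mutated).hexdigest())
```

Real AV engines look at far more than whole-file hashes, which is exactly why Anderson's system had to learn which combinations of such edits actually fool a given detector.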
Anderson also acknowledged that all machine-learning models have blind spots, and inventive attackers can exploit those weaknesses for their own benefit.
By making slight changes to malware samples, the team built a fairly simple scheme for generating hard-to-detect malicious code. They monitored the detection engine's response to each change and, guided by that feedback, assembled sequences of effective tweaks that let the malicious software slip past the security sensors.
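The feedback loop described above can be sketched as a simple hill-climbing search: mutate the sample, ask the detector for a score, and keep only the mutations the engine likes less. Everything here is hypothetical, `toy_engine` is a stand-in scorer, not a real AV engine, and the byte-flip mutation is a simplification of the functionality-preserving edits a real system would use.

```python
# Hypothetical sketch of the trial-and-error evasion loop (assumed names,
# not Endgame's actual code): mutate, query the detector, keep what works.
import random

def toy_engine(sample: bytes) -> float:
    """Toy detector: flags samples by their density of 0xCC bytes."""
    return sample.count(0xCC) / max(len(sample), 1)

def evade(sample: bytes, threshold: float = 0.05, max_iters: int = 10_000) -> bytes:
    score = toy_engine(sample)
    for _ in range(max_iters):
        if score < threshold:
            break
        # Candidate mutation: replace one byte at random. A real system
        # would restrict itself to edits that preserve functionality.
        i = random.randrange(len(sample))
        candidate = sample[:i] + bytes([random.randrange(256)]) + sample[i + 1:]
        new_score = toy_engine(candidate)
        if new_score <= score:   # keep only mutations the engine scores lower
            sample, score = candidate, new_score
    return sample

seed = bytes([0xCC]) * 50 + bytes(50)   # heavily "flagged" starting sample
evaded = evade(seed)
print(toy_engine(seed), "->", toy_engine(evaded))
```

The design point is that the attacker never needs to see inside the detector: a detection score (or even a plain accept/reject verdict) is enough of a training signal, which is what makes this style of attack practical against black-box AV engines.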
The malware-tweaking machine-learning software was trained for more than 15 hours over 100,000 iterations; afterwards, 16% of the modified samples slipped past the security engine.
The software will be published on the firm's GitHub page, and Anderson encouraged people to try the mechanism out. It is also a good opportunity for security companies to investigate its impact on their own products.