Microsoft has launched an AI assistant called Security Copilot, which uses OpenAI software and can integrate with other Microsoft Security services.
Microsoft is doubling down on its AI push, with a new cybersecurity product that uses the generative AI capabilities of GPT-4.
The tech giant has revealed Security Copilot, a ChatGPT-style virtual assistant that uses AI to assist cybersecurity teams.
Microsoft said the system combines OpenAI’s GPT-4 with its own “security-specific model”, which is informed by more than 65trn daily signals from Microsoft’s global threat intelligence.
According to Microsoft, this helps the assistant detect issues that human analysts may miss, and could help address the current talent gap in the sector.
“When Security Copilot receives a prompt from a security professional, it uses the full power of the security-specific model to deploy skills and queries that maximise the value of the latest large language model capabilities,” Microsoft said in a blog post.
“In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture.”
The company said its Security Copilot integrates with other Microsoft security products, with plans to expand the Copilot over time to a “growing ecosystem of third-party products”.
As a measure of data protection, Microsoft said the data of organisations that use Security Copilot will not be used to train foundation AI models.
“Security Copilot doesn’t always get everything right,” Microsoft said. “AI-generated content can contain mistakes. But Security Copilot is a closed-loop learning system, which means it is continually learning from users, who can give explicit feedback through a feedback feature built directly into the tool.
“As we continue to learn from these interactions, we are adjusting its responses to create more coherent, relevant and useful answers.”
In January, experts predicted that AI systems could shake up the cybersecurity sector by improving defences while creating new possibilities for criminals.
A recent Europol report claims AI chatbots such as ChatGPT can exacerbate problems of disinformation, fraud and cybercrime.
Earlier this year, researchers at cybersecurity company Check Point claimed they found multiple examples of criminals sharing malware created with the help of ChatGPT on hacker forums.