
3 steps to maximise AI tools securely


13 Feb 2025

Image: © wacomka/Stock.adobe.com

Hackuity’s Pierre Samson discusses the AI adoption curve – from hype to the hard lessons that can hopefully lead to a more secure and impactful approach.


AI has dominated the tech agenda for the last couple of years, as companies rushed to adopt technologies promising efficiency, automation and cost savings.

However, over the course of 2024, reality set in. Data quality issues, rising costs and security concerns have made many organisations rethink their approach to AI integration. We’re now in a phase where businesses are reassessing whether these tools can deliver long-term value, as the returns they expected are proving slower to materialise than anticipated.

At the 2024 Data and Analytics Summit, Gartner predicted that 30pc of all generative AI (GenAI) projects will be scrapped by the end of 2025 due to a lack of tangible return on investment (ROI).

Despite this, AI still holds tremendous potential if businesses adopt a measured approach to tool selection and take precautions to ensure that security is at the heart of their decision-making process.

Evaluating AI tools

One thing is clear: cyber criminals have always targeted new technologies. From the outset, your security team must be involved in the evaluation of new software, from selection through to deployment. Without that oversight, new AI tools can introduce significant security vulnerabilities, especially when integrated into existing infrastructures. By asking the right questions upfront, security leaders can ensure that AI tools align with both operational needs and security standards.

One of the most important aspects is evaluating how the tool processes, stores and transmits data. Security teams must determine if the tool needs access to sensitive or personally identifiable information (PII) and examine the encryption methods used for data both at rest and in transit. Vendors should provide strong privacy policies, and organisations should ensure there are adequate data hygiene measures in place, including anonymisation and regular data purging to reduce risks.
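To make those data hygiene measures more concrete, here is a minimal Python sketch of field-level anonymisation and retention-based purging applied to records before they are passed to an external AI tool. The field names, static salt and 30-day retention window are illustrative assumptions, not a prescription for any particular product or vendor.

```python
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # hypothetical retention window for regular purging


def anonymise_record(record: dict) -> dict:
    """Replace direct identifiers with salted hashes before the record
    leaves the organisation (field names are illustrative only)."""
    cleaned = dict(record)
    for field in ("email", "full_name", "phone"):
        if field in cleaned:
            cleaned[field] = hashlib.sha256(
                ("static-salt:" + str(cleaned[field])).encode()
            ).hexdigest()
    return cleaned


def purge_stale(records: list[dict]) -> list[dict]:
    """Drop records older than the retention window (regular data purging)."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["created_at"] >= cutoff]


if __name__ == "__main__":
    sample = {
        "email": "jane@example.com",
        "full_name": "Jane Doe",
        "created_at": datetime.now(timezone.utc),
        "query": "Summarise Q3 figures",
    }
    print(anonymise_record(sample))
```

In practice the anonymisation and purging rules would follow the vendor’s privacy policy and the organisation’s own retention schedule; the point is simply that these checks sit between your data and the AI tool.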

The security fundamentals that apply to any software must be in place to prevent unauthorised access or exploitation, and teams need to ensure that the AI they deploy follows stringent security measures to avoid unauthorised access. This includes implementing multifactor authentication (MFA), applying role-based access control and maintaining thorough user activity logs.
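As a rough sketch of what role-based access control and user activity logging can look like around an AI tool’s administrative functions, the Python below gates a sensitive action behind a role check and writes every attempt to an audit log. The user names, role map and change_model_settings function are hypothetical, and MFA itself would typically be enforced by the identity provider rather than in application code.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_tool_audit")

# Hypothetical role map; in practice this comes from your identity provider.
USER_ROLES = {"alice": {"analyst"}, "bob": {"admin", "analyst"}}


def requires_role(role: str):
    """Role-based access control: only users holding `role` may call the
    wrapped function, and every attempt is recorded in the audit log."""
    def decorator(func):
        @wraps(func)
        def wrapper(user: str, *args, **kwargs):
            allowed = role in USER_ROLES.get(user, set())
            audit_log.info("user=%s action=%s allowed=%s", user, func.__name__, allowed)
            if not allowed:
                raise PermissionError(f"{user} lacks role '{role}'")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator


@requires_role("admin")
def change_model_settings(user: str, temperature: float) -> None:
    print(f"{user} set temperature to {temperature}")


if __name__ == "__main__":
    change_model_settings("bob", 0.2)      # permitted and logged
    # change_model_settings("alice", 0.9)  # would raise PermissionError
```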

Every robust cyber risk management programme must include vulnerability management processes to ensure that software flaws are identified and remediated. However, in the rapidly evolving AI space this presents more of a challenge, as the traditional practices for assessing risk and applying patches are not as clear cut, particularly for highly bespoke AI tools.

While vulnerability disclosure frameworks are being revised, there is a greater onus on security teams to assess the risks and ensure that AI tools are incorporated into their vulnerability management programmes.

Navigating compliance challenges

Navigating the unpredictable regulatory landscape is a significant challenge in AI adoption, as organisations must still comply with strict data protection laws – such as GDPR, CCPA or HIPAA – that vary by jurisdiction, industry and data set, to ensure sensitive information is safeguarded.

Companies operating in the EU also need to account for the new EU AI Act, which came into force in August 2024. The Act establishes a need for greater transparency from AI developers and vendors, and also seeks to categorise tools based on risk. Higher-risk use cases, such as medical diagnosis, will face stricter requirements around how data is gathered, used and protected. Knowing where the tools you develop and deploy fall within these risk categories is imperative for compliance.

Establishing guidelines

Despite the challenges, many organisations are pushing ahead with AI projects, determined to unlock their potential. Security teams have had to adapt quickly to provide the right guidelines.

This requires cross-department collaboration, and security teams will need to work closely with other key business units including HR and legal to identify risks and guide AI projects, especially around data privacy.

Guidelines should clarify which tools are acceptable for business use cases, how these can be applied and what data is appropriate for input. Clear policies will reduce the risk of unsafe AI tools finding their way into the business environment, or of their usage causing legal and compliance issues.
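One lightweight way to make such guidelines enforceable is to check inputs against an approved-tool and data-classification policy before anything is sent. The Python sketch below is illustrative only: the tool names, the policy mapping and the naive email-based classifier are assumptions, and a real deployment would need far more robust data classification.

```python
import re

# Hypothetical policy: approved tools and the data classes permitted for each.
APPROVED_TOOLS = {
    "internal-copilot": {"public", "internal"},
    "vendor-chatbot": {"public"},
}

# Very rough PII pattern, for illustration only.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def classify(text: str) -> str:
    """Naively classify input text; anything containing an email address
    is treated as 'confidential' in this sketch."""
    return "confidential" if EMAIL_PATTERN.search(text) else "internal"


def is_permitted(tool: str, text: str) -> bool:
    """Apply the guideline: is the tool approved, and is this class of data
    appropriate for it?"""
    return classify(text) in APPROVED_TOOLS.get(tool, set())


if __name__ == "__main__":
    print(is_permitted("internal-copilot", "Draft a summary of our roadmap"))    # True
    print(is_permitted("vendor-chatbot", "Email jane@example.com the figures"))  # False
```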

Ultimately, AI offers tremendous advantages, but only if security teams remain vigilant and proactive in their oversight. By integrating AI into a strong security framework, organisations can reap the rewards of innovation without sacrificing safety.

By Pierre Samson

Pierre Samson is chief revenue officer for cyber vulnerability management company Hackuity. He has 20 years’ experience helping enterprises digitally transform and improve their cybersecurity posture.
