
A vector illustration of a worker plugging a giant plug into the back of a giant robot's head.
Image: © KanawatTH/Stock.adobe.com

Workplace infrastructure and the rise of the AI agent

13 Feb 2025

Nitesh Bansal discusses the growing popularity of AI agents and why workplace data policies will need to change as a result.


As explained by Nitesh Bansal, the CEO and managing director of digital product engineering company R Systems, AI agents are autonomous models with the ability to learn, perform tasks and make decisions, without the need for constant human intervention. They combine machine learning, natural language processing and reasoning to automate tasks, analyse data and optimise workflows.

“Unlike traditional automation, agentic AI adapts dynamically, enabling proactive problem-solving and multi-agent collaboration through high-level cognitive functions like thinking, reasoning and remembering, like a human mind,” he said.

For companies, particularly those operating within the STEM sphere, agentic AI, by virtue of its ability to automate mundane and routine tasks, is becoming crucial to furthering research and innovation. As noted by Bansal, in areas such as life sciences, AI agents can streamline clinical trials, accelerate drug discovery and bring life-changing therapies to market sooner.

Through personalised learning platforms, AI agents are also democratising access to STEM education and the tools needed to work effectively in that space. This enables anyone, whether a student, a professional or a tech enthusiast, to teach themselves the skills needed for a role in an industry undergoing near-constant reinvention.

If you build it, they will come

When it comes to deploying and using workplace AI agents, there are many challenges, from skills gaps among staff and poor retention to limited data quality and a weak company-wide understanding of the technology's true potential. But for Bansal, the complexity of integration and growing infrastructural demands are the critical issues plaguing the industry.

Citing a Tray.AI survey of more than 1,000 enterprise technology leaders and practitioners, he noted that 42pc of responding companies required eight or more data connections for successful AI agent deployment. That level of integration, combined with the need for high computational power and low-latency networks, is often central to a company's success and can put significant pressure on available resources.

“While some companies have robust infrastructure, many face gaps,” he said. “A recent study found that only 22pc of organisations have architecture ready for AI workloads without modifications. 86pc of enterprises require upgrades to their existing tech stack in order to deploy AI agents.

“It’s important that enterprises consider their need for scalable, cloud-based solutions and access to advanced computing resources,” he explained. “Without them, I anticipate that many organisations will either face delays in deployment or run into issues if they don’t have a robust plan for upgrading their infrastructure in place.” 

To build infrastructure strong enough to support the full capability of an organisation's AI agents, Bansal advises companies to invest in a few key areas, such as high-quality data pipelines for collecting, cleaning and preparing information. Robust storage solutions and scalable computing resources are also necessary, as is the ability to integrate existing systems for widespread compatibility.

Workforce training and a deep understanding of ethical governance will underpin the entire system; according to Bansal, for AI agents to remain free of bias and misuse, there must be clear policies on data, privacy and security.

Policing policy

For this to happen, he believes organisations must continually update their data policies. Because AI agents often process private information, companies should keep those policies in step with changing regulations and improved safety methods.

“There are laws, such as GDPR and CCPA, that require robust data governance frameworks and ensure privacy and security. In order for organisations to effectively address their data policies, they must first fully assess and plan for updates to these policy changes,” he said.

“This includes conducting a comprehensive data audit to understand their current data landscape, focusing on data sources, management practices and deployment across the business. This audit will identify gaps and areas needing improvement. They should also implement a risk-based approach when developing and deploying AI, assessing whether AI is necessary for specific contexts and identifying potential security threats.”

The continued advancement of AI in the workplace has created new opportunities for the individual, as well as the organisation. In fact, entirely new careers, such as AI trainers, prompt engineers and ethical AI auditors, have emerged as popular and exciting new avenues for professionals and companies to explore.

But it also means there are more opportunities for maliciously minded people to infiltrate and exploit infrastructure weaknesses, especially in organisations that don't fully comprehend the steps it takes to safely install, use and maintain agentic AI technologies.

For Bansal, now more than ever, companies need to ensure that the human element is as skilled and clued-in as the non-human elements so that employees can collaborate with the technology effectively.


By Laura Varley

Laura Varley is a Careers reporter at Silicon Republic. She has a background in technology PR and journalism and is borderline obsessed with film and television, the theatre, Marvel and Mayo GAA. She is currently trying to learn how to knit.
