Why greater strides in AI and data literacy are needed

21 Mar 2025


BearingPoint’s Ellie Fitzpatrick discusses the growing importance of data and AI literacy for organisations seeking to best leverage their benefits.

Data and AI are at the top of virtually every company’s agenda right now, and for good reason. Data, and by extension knowledge, is power, and the exponential advancement of AI is set to supercharge that power.

But leveraging it effectively requires proper understanding, literacy and strategy. Ellie Fitzpatrick is director of data strategy and enablement at BearingPoint. She began her career more than 20 years ago with a deep focus on data quality management.

Over time, she transitioned into data governance and strategy roles, and AI has become an increasingly important tool and a driver of the need for data strategies. “For data professionals, AI has been key in maturing data management, enhancing data quality, automating data processes and deriving valuable insights from data,” she told SiliconRepublic.com.

Building a data strategy

Fitzpatrick said that in order for companies to create an effective AI and data strategy, leaders need to define their objectives and really understand where value can be achieved. This means having an understanding of what others are doing both in and outside their industry.

“A gap or a mistake I often see is that companies think of the data and AI strategy as being in the domain of the technology teams. But developing an effective strategy requires a holistic view of people, process and technology,” she said.

“Data and technology teams have a key role in influencing and shaping the strategy, but it must be a wider activity with executive level support.”

Implementation of any strategy requires effective communication, and an AI and data strategy is no different. Establishing processes that allow teams to regularly contribute to the strategy helps to bring them on a journey and foster a culture of adaptability and innovation.

“Getting into the practicalities, organisations must first understand their current maturity, by conducting a thorough assessment of their data assets, infrastructure and people capabilities,” said Fitzpatrick. “This may reveal the need for investment in scalable data platforms, operating model developments and upskilling programmes.”

Upskilling is another important part of any new strategy, particularly when it comes to AI. Fitzpatrick said that while data and AI-specific skillsets – such as those of data scientists and engineers – are crucial, knowledge of data governance and ethics is also key.

“Additionally, companies should look for individuals with strong analytical skills, domain knowledge and the ability to translate technical insights into business strategies,” she said.

“Last but not least, in recruiting and developing skills and expertise, the most innovative and robust solutions rely on the diversity of the teams involved. Diversity is a broad lens across many different categories, but it has proven to be the differentiator in successful innovations.”

Insufficient literacy

One of the biggest hurdles for companies leveraging AI and data is a lack of understanding. Now that the flow of data is much less linear, the expectation on every individual to understand the information coming at them is high.

“I’m actually surprised that we haven’t seen greater strides in AI and data literacy. But that’s not to say the situation has stalled, it’s just not moving as quickly as I believe is needed, with a significant gap to bridge,” said Fitzpatrick.

“AI is now integrated into so many aspects of life, from healthcare to finance, to our household items. However, data and AI literacy is a critical skill that hasn’t been comprehensively adopted into our education systems.”

Educating society as a whole will be key to navigating the growing complexities of the digital age, and programmes such as Data Smart have been created for this very purpose. But companies need to place proper literacy at the heart of any data and AI strategy, especially given the possibility that improper use of data and AI could exacerbate existing inequalities.

“Many people are aware of AI’s presence in their daily lives, but a deeper understanding of how it works and its implications is often lacking,” said Fitzpatrick. “Without sufficient literacy, we risk widening the digital divide and creating a society that is unprepared for the demands of a data and AI-driven economy.”

The global AI race

Much of the growth of AI as a technology has played out on a global stage, with many companies and geopolitical players fighting to come out on top. You need only look to the disruption caused by China’s DeepSeek or the recent US Artificial Intelligence Action Plan to see how much importance it is being given.

Fitzpatrick said the global race is exciting but only as long as ethical considerations are taken into account. “Organisations must adopt a balanced approach that includes robust governance frameworks, continuous monitoring and stakeholder engagement to mitigate risks and build trust,” she said.

“By focusing on responsible AI practices, we can leverage the technology’s benefits while minimising potential harms and avoiding negative reputational impact.”

Among the global chatter are questions around regulation and governance. Those with vested interests are unsurprisingly fighting for looser regulations, all under the guise of not hindering innovation.

However, Fitzpatrick said she believes the reality is the opposite. “Regulation and governance are key enablers of innovation, especially with emerging technologies. By providing guardrails and parameters, they encourage action,” she said.

“For instance, the rapid advancement of generative AI has led to exaggerated claims, legitimate concerns and also nervousness. However, regulation and governance can create a level playing field, fostering confidence to innovate and harness the benefits of these remarkable developments.

“Some may disagree, but regulation is usually aligned to the level of risk and is not a blunt instrument; for example, the EU AI Act is risk-based, meaning it is targeted and proportionate. It also gives special consideration to the needs of SMEs and start-ups.”

All of this is a key consideration for companies, which must navigate politically influenced regulatory shifts while maintaining robust internal governance and solid frameworks to futureproof their strategies.

“Fundamentally, governance and compliance should result in building trust and delivering positive outcomes from data and AI use. The impact of breaching consumer trust on brand and reputation is very damaging for companies but is avoidable with a robust and strategic approach.”


Jenny Darmody is the editor of Silicon Republic

editorial@siliconrepublic.com