GenAI and the golden opportunity: Playing it SAFE

GenAI is here to stay. But just adopting an AI framework isn't enough — companies must ensure platforms are safe, accurate, and ethical.

The market for generative AI (GenAI) is growing exponentially. As noted by Bloomberg, spending on GenAI topped $67 billion in 2023 and is on track to reach $1 trillion by the end of the decade.

However, despite increasing adoption, the Harvard Business Review notes that many companies are hitting roadblocks in GenAI deployment in large part because these tools aren't repackaged versions of familiar automation processes, but rather an entirely new way to interact with, explore, and analyze data.

To make the most of the golden opportunity (and the disruption) triggered by GenAI, businesses need to rethink their approach. This starts with identifying where current data management and analysis output fails to meet demand, then selecting their most likely use cases for AI deployment, and finally implementing a GenAI framework capable of maximizing benefits and minimizing risks.

Data difficulties: Common challenges in current analysis

Traditional data analysis methods excel at producing actionable output from finite, structured data sets. However, the exponential rise of both data sources and types has created a new challenge for companies: data overload.

In a cloud-connected, mobile-enabled world, available data is effectively infinite. What's more, this data doesn't follow predictable patterns. Along with structured information, such as spreadsheets and tables, enterprises must also collect and integrate unstructured data, including consumer sentiment and behavior patterns, along with surveys and social media posts. The result is an overload of data, both in volume and variety.

Organizations also face challenges with staffing and skills. While they recognize that current analysis methods can't keep pace, they're not sure how to effectively leverage new solutions such as GenAI. In many cases, they lack the internal expertise to implement and manage these solutions. As a result, many businesses have only a tentative understanding of AI tools: while they recognize the potential benefits, they also see GenAI as technically challenging, rapidly evolving, and inherently risky.

This leads to common questions and concerns: What if data is exposed? What if outcomes are inaccurate? What are the consequences?

Risk and reward: The rise of generative AI

What began as a curiosity has become a mainstream technology. As a result, many companies are now scrambling to deploy and integrate GenAI solutions so that they aren’t left behind.

The challenge? These self-learning technologies come with both potential rewards and possible risks. 

On the positive side, GenAI tools make it possible to automate work-intensive, highly repetitive tasks. This type of automation is relatively simple to implement, making it a great starting point. In addition, generative solutions excel at extracting data from multiple sources and then creating meaningful representations of that data, such as charts, tables, or lists.

Large language models (LLMs) combined with natural language processing (NLP) also allow staff to communicate more naturally with AI tools. Rather than following specific guidelines for interaction, users can speak conversationally with these solutions, which helps them get better answers more quickly.

Meanwhile, when it comes to potential risks, three concerns are common.

Data security

Theoretically, the more available data, the better the results produced by GenAI. However, for enterprises, this creates a paradox: if data access is too narrow, ROI may suffer, but if access is too broad, protected data could be exposed. As a result, the current choice for many companies is to simply limit or even block the use of AI tools.

However, companies won't be able to avoid them forever.

Output accuracy

Ask GenAI platforms a question, and they'll likely produce a seemingly well-reasoned and plausible response. The problem? Accuracy is not guaranteed. Depending on the data used and the question posed, AI may deliver entirely confident — and entirely incorrect — answers.

Platform complexity

Businesses are also concerned about potential platform complexity. It makes sense: the more data sources AI leverages, the more connections are required, and the harder it becomes to ensure transparency and visibility.

Best practice for GenAI implementation

While deployments differ in scope and scale, keeping a human in the loop is a universal best practice that applies to any effective AI implementation.

This is critical for AI success because generative tools aren't like their automated process predecessors. Where traditional tools were designed to work with specific data sources using defined rulesets, AI solutions are capable of learning over time.

As a result, the output of GenAI tools can steadily improve as they're exposed to more data sources and create new connections.

This evolving output creates a new operational condition. Instead of automated data collectors, AI tools are more like "team members." Rather than treating GenAI like static solutions, users can engage them in dynamic dialogues to help discover new datasets or create new connections. However, the dynamic nature of these tools also calls for increased oversight. While traditional automation processes were effectively "set and forget," regular review of both input and output data is critical to ensure that the new tools stay on track.

SAFE and sound: The Mazars GenAI deployment solution

Mazars can help companies make the best use of burgeoning generative platforms with deployment frameworks that focus on four key areas: security, adoption, factualness, and ethics (SAFE).


Security ensures that GenAI solutions are compliant with relevant data privacy, protection, and governance standards. It also speaks to access requirements and permissions needed to leverage AI capabilities.


Effective adoption of GenAI doesn't just "happen." It requires careful integration of new AI tools with existing systems and processes, such as data platforms, pipelines, warehouses, and lakes, along with data analytics.

Factualness matters because AI tools are only as good as their output. Regular review of questions, answers, and data sources can help ensure that content is accurate, consistent, and truthful.


Social values and ethics are not built into AI tools. As a result, effective use depends on the creation and implementation of safeguards that respect the norms and values of both stakeholders and society at large.

GenAI is here to stay. Make the most of evolving AI deployments with Mazars' generative AI enterprise rapid and SAFE deployment solution.

Learn more about the journey to SAFE AI adoption.

The information provided here is for general guidance only, and does not constitute the provision of tax advice, accounting services, investment advice, legal advice, or professional consulting of any kind. The information provided herein should not be used as a substitute for consultation with professional tax, accounting, legal or other competent advisers.
