This ERP Expert Turns Down Million-Dollar AI Projects — Because She Knows What Regulators Will Find

Mariana Tataryn stopped an automation project cold at a major organization. The AI system looked fine on the surface—but she saw it would lock compliance violations into the code. A year later, when auditors came knocking, her decision saved the company from an investigation that would have cost millions.

In 2024, the European Union enacted the AI Act, the world's first comprehensive legislation regulating artificial intelligence. The U.S. is drafting its own version. China is already enforcing regulations. Companies are pouring billions into AI, but many of these initiatives are ticking time bombs: they function until someone starts asking how decisions are really being made.

Mariana Tataryn is one of the experts who can spot these bombs before they explode. She has halted multiple multimillion-dollar projects, not out of personal ethics or philosophical conviction, but because of risk.

"Ethics isn't some abstract idea for me. It's operational risk. If a decision can't be explained to auditors or regulators, it's unacceptable," she says.

Bridging Multiple Worlds

Mariana, currently a manager at Deloitte, designs financial systems with embedded AI for organizations with billion-dollar budgets. A Certified Public Accountant (CPA) and Certified Fraud Examiner (CFE), she's worked on dozens of projects across the U.S., Canada, Poland, and Ukraine.

Her career moved fast: less than a decade from operational finance to leading transformation projects worth hundreds of millions. She's managing work that usually goes to senior consultants with twice her experience—and this is just the beginning.

She sits at a rare intersection: technology, finance, and compliance. IT professionals understand the algorithms, but often miss the regulatory implications. Finance teams know reporting, but don't see what's happening inside the model. Legal teams know the law, but not the data. Mariana sees the whole picture.

Saying "No" to a Client

One project she turned down involved automating a financial process for a major organization. The client wanted to implement AI "as is"—simply taking the existing process and automating it with machine learning. The pitch was clean: save time, cut manual work, boost efficiency.

But when Mariana analyzed the process, she uncovered a problem: the existing workflow contained control gaps and compliance violations. None were critical on their own, but together they were serious enough to raise red flags with regulators.

"Automation would have locked those violations into the code. Instead of fixing the issue, we would have made it permanent and invisible. It would create an illusion of control—while actually increasing risk."

She insisted that the project be halted. The client was unhappy—the automation had already been budgeted, and results had been promised to leadership. Stopping the initiative meant admitting the underlying processes were flawed.

"I told them: no. We're not doing this. Sometimes that costs you the project. But reputation is worth more."

The project was reworked. First, the flawed processes were corrected, and proper controls were implemented. Only then was the revised process automated. It took longer and cost more.

A year later, the organization underwent an audit, and the auditors went deep. Had the original plan gone ahead, they would have found the violations, and the resulting investigation would have cost ten times what the project delay did.

"The client later thanked me. But in the moment, I looked like the person slowing things down."

Three Principles That Save Projects

Through dozens of AI projects in finance, Mariana developed three core principles that determine whether a project should move forward.

First: explainability.

"The model said so" won't cut it with auditors. Financial reporting must be verifiable. If AI recommends reclassifying a transaction or adjusting a forecast, the reasoning needs to be clear.

"I saw a multimillion-dollar project fail for exactly this reason. The model worked great in testing, but once audits began, no one could explain the decisions. The project was shut down."

Second: fairness.

Models are trained on historical data. If that data includes systemic bias—for example, if the company historically underreported certain categories of expenses—the model will replicate the error and frame it as "scientific."

"Dirty data isn't just a technical issue. It's a compliance risk. No matter how advanced the technology, garbage in means garbage out."

Third: accountability.

There must always be a human responsible for a decision. AI can recommend, analyze, alert—but final decisions must rest with someone who understands the context and consequences.

"Trying to automate everything is a major cause of failure. People think: if we have AI, let's let it decide everything. In finance, that's unacceptable."

What Doesn't Work — and How to Fix It

Mariana has worked with dozens of organizations implementing new systems, tools, and AI in financial operations. The reasons for failure repeat themselves.

The black box problem keeps killing projects. A model spits out a recommendation but won't say why. For marketing, maybe that's fine. In finance? Dead on arrival. Auditors ask: What's the basis for this decision? "The algorithm said so" doesn't fly.

Tech without governance is another trap. Companies rush to implement AI and think about controls later. "Let's launch now and sort it out later" is disastrous in regulated environments.

Team silos create blind spots. IT doesn't understand compliance. Finance doesn't understand the technology. Legal doesn't understand the data.

"In the public sector and regulated industries, these teams operate in bubbles. No one sees the full picture."

Dirty data sinks more projects than people admit. Most teams skip data validation because it's boring and time-consuming.

"I've stopped projects when I saw the data was dirty. Clients said: let's launch and clean it later. No. You clean first—then you launch."

Focus on launch, not lifecycle, is the last major pitfall. Organizations fixate on getting the project out the door. But then what? What happens a year from now? How will it be audited?

Here's what actually works:

  • Build explainability from day one. In regulated industries, explainable-by-default models aren't optional—they're the baseline.
  • Embed compliance from the start. Controls need to be baked into the architecture, not bolted on at the end.
  • Break down the silos. IT, finance, and legal must work together from day one—not just show up for status meetings.
  • Validate data before training models. This seems obvious, but it's the step most projects skip (see the sketch after this list).
  • Think lifecycle, not just launch. Plan for operation, audit, and updates—not just deployment.
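
For the validation step above, even a handful of blunt checks, run before any model sees the data, catch much of what sinks projects later. A minimal hypothetical sketch; the column names and rules are illustrative, not a prescribed standard:

```python
# Minimal pre-training data validation: fail fast, before any model training.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems; empty means clear to proceed."""
    problems = []
    if df.isna().any().any():
        problems.append("missing values present")
    if df.duplicated().any():
        problems.append("duplicate rows present")
    if (df["amount"] <= 0).any():
        problems.append("non-positive amounts present")
    if not df["posted_at"].is_monotonic_increasing:
        problems.append("postings out of chronological order")
    return problems

df = pd.DataFrame({
    "amount": [100.0, 250.0, -5.0],
    "posted_at": pd.to_datetime(["2024-01-02", "2024-01-01", "2024-01-03"]),
})

issues = validate(df)
if issues:
    raise SystemExit(f"Clean first, then launch: {issues}")
```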

Ukrainian Tech Meets Western Regulation

Alongside her Deloitte work, judging startup competitions and pitch events, and mentoring tech startups in Silicon Valley, Mariana advises Ukrainian tech startups entering the North American market.

"Ukrainian teams are technically strong. But they often underestimate compliance risks. What's acceptable in Ukraine can be a violation here."

One Ukrainian startup wanted to collect certain user data to train its model. Mariana pointed out that this would directly violate GDPR and Canadian privacy laws. The data architecture was redesigned before launch.

A Call to the Industry

The AI boom keeps going. Companies invest, pilot, scale. But without ethics, these projects become liabilities—legal, financial, reputational.

Mariana's story shows that walking away from an unethical project today can save millions tomorrow. This isn't about being a "good person." It's about managing risk.

"Ethics must be built in from the start—not patched on at the end. If you're thinking about compliance at the deployment stage instead of during design—you've already lost."

Regulations are tightening. The EU AI Act is just the beginning. In a few years, the projects that seem "good enough" today will face strict requirements around explainability, fairness, and accountability.

The companies that embed ethics now will gain a competitive edge. The rest will face fines—and lose trust.
