Your Data Already Knows the Answers — You Just Can’t Ask It Yet

Why private AI models are the next evolution of eCommerce integration — and why New Zealand businesses should be paying attention right now.

Here’s something I see almost every week.

A business owner sits in front of a screen full of dashboards, reports, and spreadsheets. They’ve invested serious money in their ERP, their CRM, their eCommerce platform. They’ve integrated those systems — the data flows. Orders sync. Inventory updates. Financial records reconcile.

And yet, when they need to answer a simple question — “Which customers haven’t reordered in 90 days and usually order by now?” — they still have to ring someone in accounts, who has to pull a report, who has to cross-reference it with something else, who gets back to them maybe tomorrow. Maybe next week.

The data was there the whole time. The systems just don’t let you ask.

That’s about to change.

The Rise of Private AI

You’ve probably heard about ChatGPT, Copilot, and the wave of AI tools reshaping how people work. But here’s the thing that most businesses miss: the really transformative use of AI isn’t in generating marketing copy or summarising meeting notes. It’s in letting people talk to their own data, in plain English, and get real answers.

I’m not talking about a generic chatbot. I’m talking about a private AI model — one that runs inside your own infrastructure, looks only at your business data, and answers questions that no pre-built report was ever designed to handle.

The technology to do this is called a Large Language Model (LLM). Tools like Ollama now let you run powerful open-source models — Meta’s Llama, Mistral, DeepSeek, and others — on your own server. No data leaves your building. No cloud subscriptions. No risk of your sensitive business information being fed into someone else’s AI training pipeline.

For New Zealand businesses, where data sovereignty and the Privacy Act 2020 are real considerations, this isn’t just an advantage. It’s a requirement.

Note on the Privacy Act: The IPP3A amendment comes into force on 1 May 2026, introducing new obligations around how personal information is collected and processed. If you’re using AI with personal data — and most businesses are — this amendment is directly relevant. Running a private model on your own infrastructure puts you in the strongest possible position to comply.

Why “Private” Matters More Than You Think

Before we go further, I want to address something that doesn’t get enough attention: the security gap between private, locally-run AI and cloud-based models is significant — and it’s growing.

When you send your business data to a cloud AI service, you’re not just accepting a privacy risk. You’re accepting a security risk. Datacentre-hosted LLMs — particularly those used in agentic AI workflows, where the AI takes actions on your behalf — introduce vulnerabilities that simply don’t exist in a private deployment.

One of the most serious is prompt injection: a type of attack where malicious instructions are embedded in data the AI processes, causing it to behave in unintended ways. In an agentic context — where the AI might be querying your ERP, sending emails, or updating records — the consequences can be significant.

This is precisely why a growing number of international government departments have outright banned cloud-based AI in the workplace. The risk isn’t theoretical. It’s the reason organisations handling sensitive data are moving toward private deployments.

A private LLM, running on your own infrastructure, eliminates this entire attack surface. Your data never touches an external server. There’s no API to intercept. No third-party model to manipulate. The AI operates entirely within your control.

A Word on Accuracy: How We Prevent AI Making Things Up

One of the most common concerns I hear about AI is hallucination — the tendency for AI models to confidently state things that aren’t true. It’s a legitimate concern, and it’s worth addressing directly before we go any further.

Standard AI models generate responses based on patterns in their training data. When they don’t know something, they sometimes fill the gap with plausible-sounding fiction. For business-critical queries, that’s not acceptable.

The solution is a technique called RAG — Retrieval Augmented Generation. In plain English: before the AI generates a response, it retrieves the actual data from your systems first. It’s not guessing. It’s finding the real answer, then explaining it to you in natural language.

But for businesses with fully integrated systems — where a single query might span your ERP, your CRM, your logistics platform, and your eCommerce data simultaneously — standard RAG has limits. When conversations get complex and data comes from multiple sources, there’s still a risk of what I’d call “confident hallucinations”: answers that sound right but miss crucial cross-system context.

The next-generation solution is GraphRAG. Where standard RAG retrieves documents, GraphRAG maps the relationships between data points across systems. It understands that a customer record in your CRM connects to an order in your ERP, which connects to a shipment in your logistics platform. That relational understanding dramatically reduces errors when queries span multiple systems — which is exactly the kind of query that delivers the most value.

For an integration-first AI deployment, GraphRAG isn’t a nice-to-have. It’s the architecture that makes cross-system intelligence reliable.

What This Actually Looks Like

Imagine this: you walk into work on a Monday morning. Instead of opening four different systems and running a series of reports, you open a chat interface and simply ask a question.

Sales and Customer Intelligence

“What were my top-selling products last month, and which ones had the highest return rate?”

The AI queries your eCommerce platform, your ERP, and your CRM. It comes back with a clear, conversational answer — and might flag that one product had a 15% return rate because of a sizing issue noted in three separate customer service tickets.

“Show me all customers in the South Island who placed orders over $5,000 last quarter but haven’t ordered this quarter.”

No spreadsheet exports. No cross-referencing. One question, one answer, thirty seconds.

Operations and Inventory

“What’s my current stock level of Product X across all warehouses, and based on current sales velocity, when will I run out?”

Instead of logging into the warehouse system and manually calculating burn rate — the AI does it in one go.

Finance and Compliance

“Which supplier invoices from the last 60 days don’t match the corresponding purchase orders by more than 5%?”

“Show me all credit notes issued in March and flag any that weren’t linked to a return or complaint.”

The finance team stops hunting through transaction records and starts asking the questions that actually matter.

Executive Decision-Making

“Compare this quarter’s performance to the same quarter last year across all product categories, and highlight anything that’s moved more than 15%.”

“What would happen to our cash position if the three largest outstanding invoices aren’t paid within 30 days?”

These aren’t pre-defined reports. They’re the kind of questions a business owner actually thinks about — and until now, getting answers meant waiting days or making do with gut feel.

The Numbers Don’t Lie

Nucleus Research found organisations earn an average of $6.20 for every dollar spent on analytics — and that’s for traditional BI tools that still require trained analysts. Tableau found BI implementations deliver an average 127% ROI within three years.

But despite billions spent globally on BI, only 15–25% of employees regularly use those tools (Gartner, Dresner Advisory). The investment pays off — but only for the handful of people who can actually use it.

Conversational analytics changes that equation. Early adopters are reporting 60–80% reductions in ad-hoc reporting requests to IT, 40–70% faster time-to-insight, and analytics adoption doubling as non-technical users gain independent access to data. One analysis estimated a 4–5× ROI in the first year, with positive returns achievable within 90 days.

McKinsey’s research adds further context: knowledge workers spend roughly one full day per week — 20% of their time — just searching for and gathering information. Generative AI has the potential to automate 60–70% of those activities.

And those numbers are for tools that connect to one system. A private AI model sitting on top of fully integrated business data — spanning your eCommerce platform, ERP, CRM, accounting system, and logistics tracking — can cross-reference across all of them simultaneously. That’s a different category of return entirely.

A Note on Hardware Costs

I want to be straightforward here: running a private LLM on your own infrastructure does require an upfront hardware investment — and in the current market, that investment is meaningful.

The global RAM shortage and scarcity of consumer-grade GPUs have pushed hardware costs significantly higher in 2026. This isn’t a plug-in-and-go solution. The entry cost for a capable local inference setup has risen, and any honest conversation about private AI needs to acknowledge that.

That said, the economics still stack up — particularly for businesses already running integrated systems with high query volumes. Cloud AI services charge per token, per query, per API call. For businesses running hundreds of queries a day, those costs compound quickly. A private model, once deployed, runs at near-zero marginal cost.

The right answer depends on your query volume, your data sensitivity requirements, and your existing infrastructure. We work through that assessment with every client before recommending a deployment approach.

How Integration Makes It Possible

At Convergence, we’ve spent over 15 years connecting eCommerce platforms with backend business systems through our CODI platform. We link Shopify, WooCommerce, and Magento sites with ERPs like NetSuite, MYOB, and SAP. We connect CRMs, accounting platforms, inventory management systems — the lot.

That integration work means the data is already flowing. It’s structured. It’s synchronised. And that’s exactly what a private AI model needs to be useful.

Integration is the foundation. AI — built on GraphRAG architecture, spanning your entire business — is the intelligence layer on top.

The Phased Approach: Start Small, Scale Fast

Phase 1: Your Core DataConnect the AI to your integrated ERP, CRM, and eCommerce data. This alone unlocks enormous value — and with GraphRAG, cross-system queries are reliable from day one.

Phase 2: Expand to Other Internal SystemsDocument management, project tools, email archives, support tickets. Each new data source makes the AI smarter without requiring a redesign.

Phase 3: External Data SourcesContainer tracking. Currency rates. Commodity pricing. Competitor data. Weather. Government trade updates. Each new source adds another layer of intelligence — and another multiplier on your return.

The New Zealand Advantage

New Zealand businesses are nimble. Because our businesses tend to be smaller and more tightly run than their counterparts in larger markets, we can implement this kind of technology faster. A mid-market business here can have a private AI model running on their integrated data within weeks, not months — without a team of data scientists or a seven-figure budget.

The models are open-source. The architecture is proven. What’s needed is the integration expertise to connect the dots — and that’s precisely what we do.

What Happens Next

Private AI models represent the most significant shift in how businesses interact with their data since the move from paper to digital.

The research is clear. Traditional analytics delivers strong returns. Conversational analytics multiplies those returns. A fully integrated, private AI layer — one that spans your entire business, keeps your data exactly where it belongs, and uses GraphRAG to ensure accuracy across systems — takes it to another level entirely.

The businesses that move on this will be asking better questions, making faster decisions, and delivering better outcomes for their customers. The ones that wait will be wondering why their competitors seem to know things they don’t.

If you’re already integrated — or thinking about it — this is the logical next step. Start with your own data. Keep it private. Ask the questions your current reports can’t answer.

You might be surprised by what your data already knows.

Mark Presnell is the Managing Director of Convergence Ltd, Auckland-based eCommerce integration specialists. Known as Mr Integration, he and his team have been connecting business systems since 2009 through their proprietary CODI platform. For more information, visit convergence.co.nz

Social media & sharing icons powered by UltimatelySocial