
Raji Haththotuwegama
Head of Data and AI - Canon Business Services ANZ
Last updated Wednesday 23 July 2025

In this role, Raji works closely with CBS’ clients, leveraging both data and artificial intelligence to help them drive outcomes and realise significant business value. He supports and leads a team of data and AI specialists, providing technical analysis and design reviews while also facilitating knowledge sharing and mentoring.

Prior to joining Canon Business Services, Raji held roles including Head of Data and AI and General Manager – Strategy and Innovation, and founded his own AI startup.
 
With more than 25 years in IT, including a decade in data science and AI, Raji has led numerous programs that have transformed businesses through data modernisation, intelligent automation, and AI.

Quick summary

Most AI projects fail. Up to 80% never deliver their intended outcomes due to common mistakes such as tackling the wrong problems, relying on poor-quality data, and ignoring data governance and privacy. Successful AI implementation depends on aligning AI initiatives with business goals, focusing on critical use cases, and making incremental improvements through quality data, engineering expertise, and continuous learning.

By addressing root causes early and making sure AI projects are built with the right infrastructure, expertise, and feedback loops, Australian organisations can dramatically reduce AI project failure and achieve long‑term value from artificial intelligence.

Artificial Intelligence implementation in businesses

Artificial intelligence is on every executive's radar. From chatbots to AI Agents, AI promises dramatic gains in productivity, customer insight, and innovation. But here’s the sobering truth: most AI projects still fail.

According to global studies, up to 80% of AI and data science projects never deliver their intended outcomes. Some stall in proof-of-concept purgatory. Others unravel due to poor data quality, user resistance, or spiralling costs. As Canon Business Services ANZ's (CBS) Head of Data and AI, Raji Haththotuwegama puts it, "Many people won't say it out loud, but most AI projects are quietly turned off within six months."

So, why is failure so common, and how can leaders avoid it? Here, we explore the real reasons AI projects fall over and offer guidance grounded in field-tested experience.

The AI hype cycle: Rushing in with no plan

Pressure to act is mounting. Boards are demanding AI strategies, vendors are overpromising, and competitors are making noise. But haste leads to shallow thinking.

"There’s a lot of pressure to adopt this new game-changing technology," says Raji. "You’re rushing into solution mode instead of understanding the problem. It’s a solution looking for a problem."

This is what leads to the "POC graveyard": a glut of pilot projects that go nowhere. They may demonstrate technical feasibility but lack a clear path to deployment or business value.

Mistake #1: Solving the wrong problem

AI is not a universal fix. Yet many projects begin without understanding what AI is actually good at, or where it adds the most value.

"The first thing to do is really understand the right use case," says Raji. "Understand the strengths and weaknesses of AI and then marry that to your biggest challenges. Don’t be afraid to go for the biggest challenge. Just carve off a small, achievable piece."

The CBS AI Accelerator Workshop helps clients identify and prioritise AI opportunities with real-world impact. The workshop brings together stakeholders from across the business (IT, operations, finance, customer service) to surface pain points and opportunities. Through a structured process, CBS consultants help participants assess and prioritise the opportunities with the strongest business case.
"It's not about trying to AI-ify everything," says Raji. "It’s about focusing on the right problems; ones that are valuable, solvable, and scalable."

The output? A prioritised roadmap that aligns technology investment with business value, while avoiding common pitfalls like poor data readiness or low stakeholder engagement.
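
For illustration only, here is one way such a prioritisation could be sketched in code: a simple weighted scoring of candidate use cases against business value, feasibility, and data readiness. The use cases, criteria, and weights below are hypothetical examples, not the CBS workshop methodology.

  # Illustrative only: a simple weighted scoring model for ranking candidate AI use cases.
  # The use cases, criteria, and weights are hypothetical examples.

  WEIGHTS = {"business_value": 0.5, "feasibility": 0.3, "data_readiness": 0.2}

  use_cases = [
      {"name": "Invoice triage assistant", "business_value": 8, "feasibility": 7, "data_readiness": 6},
      {"name": "Customer churn prediction", "business_value": 9, "feasibility": 5, "data_readiness": 4},
      {"name": "Internal knowledge search", "business_value": 6, "feasibility": 8, "data_readiness": 7},
  ]

  def score(use_case):
      """Weighted sum of 1-10 ratings for a single use case."""
      return sum(weight * use_case[criterion] for criterion, weight in WEIGHTS.items())

  # Rank highest-scoring opportunities first
  for use_case in sorted(use_cases, key=score, reverse=True):
      print(f"{use_case['name']}: {score(use_case):.1f}")

Even a rough model like this forces the conversation the workshop is designed to have: which problems are valuable, solvable, and backed by usable data.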

Mistake #2: Waiting for perfect data

Data fuels AI. But aiming for perfect, enterprise-wide data quality before starting a project is a common trap.

"Instead of solving the entire data quality problem, ask: can we get the right data to solve this problem?" says Raji. "Otherwise, you’ll never get started."

He adds: "What you need is a modern data platform that makes it easy to curate, access, and maintain high-quality data over time."

CBS recommends a phased approach to data modernisation. Start with the data that matters most to your selected use case. Then build incrementally.

Rather than aiming to overhaul the full data estate, organisations should identify the specific data assets that serve a defined use case. From there, the focus shifts to enabling secure, timely, and relevant access.

"Choose a platform," says Raji, "that only helps create quality data and maintains it, enriches it, and makes it accessible to the right people at the right stage of the lifecycle

Crucially, this isn’t just a technology issue. It's about designing an ecosystem where multiple teams, from compliance to marketing to ops, can engage with data at the level they need. Raji’s team helps clients implement platforms that allow curated access, data lineage tracking, and stakeholder-specific insights to make governance and usability seamless.

This phased approach means every investment in data infrastructure is tied to a business outcome. As Raji puts it, "There’s no point building a beautiful warehouse of data if the customer wants something that isn’t even stored there."

Designing AI-ready data environments in incremental, outcome-focused phases enables organisations to avoid common bottlenecks while still laying the foundation for long-term maturity.
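
To make "start with the data that matters" concrete, here is a minimal sketch of a data readiness check scoped to a single use case rather than the whole data estate. The column names, thresholds, and use of pandas are illustrative assumptions, not a prescribed toolset.

  # A minimal, illustrative data readiness check scoped to one use case.
  # Column names and thresholds are hypothetical.
  import pandas as pd

  REQUIRED_COLUMNS = ["customer_id", "invoice_date", "amount", "status"]
  MAX_NULL_RATE = 0.05        # tolerate up to 5% missing values per column
  MAX_STALENESS_DAYS = 30     # data must have been refreshed in the last 30 days

  def readiness_report(df: pd.DataFrame) -> dict:
      """Check only the fields this use case needs, not the whole data estate."""
      missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
      present = [c for c in REQUIRED_COLUMNS if c in df.columns]
      over_threshold = [c for c in present if df[c].isna().mean() > MAX_NULL_RATE]
      stale = None
      if "invoice_date" in df.columns:
          latest = pd.to_datetime(df["invoice_date"]).max()
          stale = (pd.Timestamp.today() - latest).days > MAX_STALENESS_DAYS
      return {
          "missing_columns": missing,
          "columns_over_null_threshold": over_threshold,
          "stale": stale,
      }

A report like this answers Raji's question, "can we get the right data to solve this problem?", without waiting for an enterprise-wide clean-up.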

Mistake #3: Forgetting the humans

AI isn’t just a technical project. It impacts workflows, roles, and culture. Failure to engage users early often results in low adoption.

"You need to take a human-centred approach. What are the repetitive, boring tasks that AI can take off their hands? How does it make their work more valuable?"

He warns that adoption falters when users feel displaced rather than empowered. Instead, AI should augment human capability, not replace it.

Resistance is often emotional, not logical. If employees feel sidelined or suspect that automation is a prelude to redundancy, their engagement drops. Early involvement, transparent communication, and co-designing solutions with users can mitigate this risk.

Raji advocates involving frontline teams in the project lifecycle, from defining requirements to testing prototypes. His team also works with HR and change management leads to prepare employees for new ways of working.

This is where "human-in-the-loop" (HITL) design matters. Especially for business-critical tasks and high-stakes decisions in sectors like finance, healthcare, or government, AI outputs must be reviewed, verified, interpreted, or actioned by people. And that’s not just a technical design decision. It affects cost, training, and trust, and there’s a high impact if the wrong decision is made. Defining the right checkpoints and escalation paths also ensures trust, accountability, and regulatory compliance.

As Raji says, "When people see how AI can amplify their strengths instead of replacing them, that's when adoption happens."

Mistake #4: Underestimating total cost of ownership

AI may save time eventually. But the upfront and ongoing costs are often underestimated.

"There are algorithms that can solve a problem brilliantly. But if it costs more than the existing process, it won’t get off the ground."

Consumption-based pricing, unpredictable token usage, complicated charging models, human oversight. It all adds up.

Initial proof-of-concept builds are often inexpensive thanks to low code and citizen builder tools. But enterprise-grade deployment introduces a raft of new costs: platform integration, data security controls, curating quality knowledge sources, AI Ops, performance monitoring, and ongoing refinement.

Then there's human review. "People often forget," says Raji, "in business-critical use cases every AI output still needs someone to verify, approve, or act on it. That cost doesn't disappear. It just shifts."

Organisations should model true total cost of ownership (TCO) across all stages, from prototype to production. This includes factoring in software licensing, model hosting, governance, user training and organisational change management.

It also includes the cost of failure: what happens if the model makes the wrong decision?

A realistic TCO model allows businesses to evaluate whether AI-driven automation is truly worth it or whether a simpler rules-based solution would suffice. It also supports better budgeting and builds stakeholder confidence.

"You need to calculate what AI automates and the cost of human review, platform management, and continuous refinement. That’s your total cost of ownership."

Mistake #5: AI literacy gaps at the top

AI projects often falter because decision-makers don’t fully understand what they’re approving. "The C-suite often nods along, but if they don’t understand how the solution works or what the limitations are, it can lead to misalignment down the track," says Raji.

He urges leaders to ask three simple questions of any partner:

  1. Have you done this before?
  2. Have you rolled it out to production?
  3. Has it been running for at least six months?

"There are a lot of so-called experts. But very few AI solutions have been live in production for over 12 months. The whole field is still new."

Education matters. “We frequently deliver executive briefings and AI upskilling sessions designed to demystify concepts like LLMs, Copilots, and AI Agents. These sessions are tailored to board-level priorities: risk, ROI, and regulatory impact,” Raji explains.

"If you want the right investment decisions, you need literacy at the top," he says. "Otherwise, your projects get hijacked by unrealistic expectations or missed risks."

You can bridge the gap between strategy and execution by embedding data translators, people who can connect business goals with AI solutions. This ensures leadership sets the vision while technologists execute against it.

Mistake #6: Ignoring the pace of change

The AI landscape evolves fast. What was state-of-the-art six months ago might now be obsolete.

"Vendors are pumping out features left, right and centre," says Raji. "You need a technology evaluation process. Is this new feature stable? Secure? Useful?"

Mid-project changes in APIs, pricing models, or regulation can derail momentum. Many teams lack a framework to evaluate updates or determine when to adopt versus wait.

“We encourage clients to build adaptive governance structures—cross-functional committees that regularly assess new capabilities, test in sandboxes, and maintain alignment with security, compliance, and budget controls,” says Raji.

"You don’t want to be rebuilding every quarter because the foundation keeps shifting," he adds. "You need partners who understand the pace of innovation and how to manage it."

Flexible architecture, modular design, and vendor-agnostic tooling are key to future-proofing your stack while maintaining delivery cadence.
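
As a simple illustration of what such an evaluation gate might look like, a new vendor feature could be routed to adopt, sandbox, or wait. The criteria and routing rules below are hypothetical, not a formal framework.

  # Illustrative adopt/sandbox/wait gate for new vendor AI features.
  # The criteria and routing rules are hypothetical examples.
  from dataclasses import dataclass

  @dataclass
  class FeatureAssessment:
      generally_available: bool      # past preview or beta
      passes_security_review: bool
      clear_business_need: bool
      pricing_understood: bool

  def adoption_decision(a: FeatureAssessment) -> str:
      if not a.passes_security_review:
          return "wait"                       # non-negotiable gate
      if a.generally_available and a.clear_business_need and a.pricing_understood:
          return "adopt"
      if a.clear_business_need:
          return "sandbox"                    # trial in a contained environment
      return "wait"

  print(adoption_decision(FeatureAssessment(True, True, True, False)))  # sandbox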

Mistake #7: No clear path beyond proof of concept

A successful pilot isn’t enough. Many organisations fail to define what success looks like upfront or what happens next.

"If the POC works, are you willing to spend money to deploy it? That’s the question that needs answering before you begin," says Raji. "Map it out. Have a plan. Secure commitment."

Too often, AI pilots are treated as experiments with no clear business owner or operational plan. Teams celebrate model accuracy, but forget about integration, user training, change management, or KPIs.

The answer is designing with deployment in mind. From the outset, define:
  • Who owns the AI solution after it goes live?
  • What systems will it integrate with?
  • How will performance be tracked?
  • What’s the plan if the AI solution underperforms?

These questions aren’t just operational. They’re strategic. A great AI solution with no support structure is a missed opportunity. A functional model with clear governance can deliver value for years.
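
One lightweight way to keep those answers visible is a production-readiness checklist that blocks deployment until every question has an owner and an answer. The fields below are illustrative, not exhaustive.

  # Illustrative production-readiness checklist for an AI pilot. The fields
  # are hypothetical; adapt them to your own governance requirements.

  READINESS_QUESTIONS = {
      "business_owner_assigned": "Who owns the solution after go-live?",
      "integration_plan_agreed": "Which systems will it integrate with?",
      "kpis_and_monitoring_defined": "How will performance be tracked?",
      "fallback_plan_documented": "What happens if the model underperforms?",
      "deployment_budget_committed": "Is funding secured beyond the POC?",
  }

  def readiness_gaps(answers: dict) -> list[str]:
      """Return the unanswered questions blocking a move to production."""
      return [q for key, q in READINESS_QUESTIONS.items() if not answers.get(key)]

  pilot = {"business_owner_assigned": True, "kpis_and_monitoring_defined": True}
  for gap in readiness_gaps(pilot):
      print("Unresolved:", gap)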

Don’t just build AI. Build the business around it.

AI is a powerful enabler, but only when applied with rigour, empathy, and a clear eye on business value. The real differentiator? Not the tools, but the thinking behind them.

"We need to stop entertaining fantasies about every company being ‘AI first,’" says Raji. "AI is a tool to deliver outcomes. It’s not the outcome itself."

Canon Business Services ANZ helps organisations move beyond AI hype and into execution, combining deep technical expertise with business pragmatism. Whether you're choosing a use case, modernising your data estate, or validating your POC, CBS brings the strategy, structure and support to make it work.

Ready to explore AI with eyes wide open? Let’s talk about what success really looks like.


Frequently asked questions

Why do AI projects fail?

Most AI projects fail due to a lack of careful planning, misunderstanding of data quality, and ignoring critical data governance and privacy needs. According to industry studies (including those by Harvard Business Review), up to 80% of AI projects fail because organisations rush into artificial intelligence implementation without aligning it to their business goals or understanding the root causes of project failure.

What are common reasons AI projects fail?

Common reasons AI projects fail include:
- Poor data quality and insufficient data cleaning 
- Lack of expertise across engineering teams and business SMEs 
- Misaligned business goals and use of new technology 
- Weak data governance and data privacy controls 
- Focusing too much on pilot projects and too little on successful implementation and model deployment

How can organisations reduce AI project failure rates?

To reduce AI project failure rates, organisations must focus on:
- Careful planning and aligning AI initiatives to critical business needs 
- Investing in high‑quality data and data cleaning 
- Establishing strong data governance and privacy practices 
- Building feedback loops for continuous learning and continuous improvement 
- Engaging engineering teams and product teams early and often 
- Focusing AI investments on successful projects and outcomes 

What role does data quality play in AI project success?

Data quality is critical for successful artificial intelligence implementation. High‑quality data allows AI developers to build accurate AI solutions and ensure AI technologies deliver value across business operations. Without focusing on data quality and data cleaning, many AI projects fail due to misaligned or unusable data.

How can businesses ensure successful AI implementation?

To ensure successful AI implementation, businesses must:
- Clearly define project objectives and key results 
- Invest in necessary data and data infrastructure 
- Build expertise across engineering and business SMEs  
- Establish strong data governance, privacy policies, and continuous learning practices 
- Adopt a feedback‑driven approach that allows AI projects to evolve with new ideas and tools 
- Focus on integrating AI within critical workflows to enable successful implementation and long‑term value

Similar Articles


AI agents vs automation

Uncover the key differences between AI agents and automation. Learn how each technology can improve workflows and drive smarter decisions for Australian businesses.

AI automation and the future of work

Uncover how AI automation is transforming the future of work in Australia. Learn about the latest trends, impacts on jobs, and strategies to adapt.

A guide on AI fraud detection

Explore how AI fraud detection enhances security of businesses in Australia. Learn about machine learning algorithms, benefits, challenges, and best practices.

Key steps in Application Modernisation

Discover effective strategies for modernising applications within Australian organisations. Unlock insights, tips, and tools to streamline your modernisation journey now.

What are the benefits of machine learning in business?

Explore the myriad benefits of machine learning in business. Learn how ML enhances efficiency and drives innovation for sustainable growth in Australia. Discover now!

Differences between Copilot and ChatGPT

Compare Copilot and ChatGPT to understand their unique capabilities. Explore how each tool can enhance productivity and creativity in different contexts.

Feature comparison between Copilot and Copilot Pro

Compare Microsoft Copilot and Copilot Pro, exploring their features, benefits, and value propositions for organisations in Australia. Read more.

The difference between chatbots and AI agents

Explore the differences between Chatbots and AI agents. Understand their unique capabilities and how they enhance user experiences for Australian businesses.

The features of Microsoft Copilot that empower productivity

Discover how Microsoft Copilot boosts productivity by integrating advanced AI into Microsoft 365. Learn more here.

The future of generative AI

Explore the future of generative AI and its potential to revolutionise industries in Australia. Discover trends and applications in this evolving field. Read more.

What is a Copilot in AI

Discover the crucial role of a copilot in AI. Uncover its significance and enhance your AI development today. Read more.

What are the differences between generative AI and discriminative AI

Explore the differences between generative and discriminative AI. Understand their significance and applications in AI for Australian-based businesses.