Why CTOs need to get to grips with responsible AI (before it’s too late)


Where do you stand on the question of responsible AI?

Are you with the 49% of leaders who see responsible AI as key to driving revenue? Do you side with the 24% who say it is simply a “cost of doing business”1? Or are you somewhere in between?

If you want to know why it matters and what legislation is coming from the EU and UK, read on. Plus, you can download a free framework for getting to grips with responsible AI.

What is responsible AI?

Let’s start by being clear about what we mean by responsible AI. The International Organization for Standardization states it is “the practice of developing and using AI systems in a way that benefits society while minimizing the risk of negative consequences”.

That means acting on issues such as bias, transparency, privacy, human rights, and the environment. So, why care, and why now?


Embedding responsible AI from the outset

McKinsey research found that more than 70% of businesses have already adopted AI2, and that number is growing rapidly.

Investment in AI and cloud services is massive, and CTOs are racing to keep up with new capabilities, opportunities, and risks. Big decisions are being made about foundational approaches that will be expensive to change. It’s far better to build responsible AI into your tech stack and supply chain now rather than re-engineer them in the future.

One challenge is getting to grips with the various aspects of this complex topic. That’s where a clear framework is a great help for organizing your thoughts.

AI regulations in the EU and UK

CTOs are used to complex global legislation, and AI is no different. The 2024 EU AI Act and the UK’s 2024 approach to AI regulation are two key frameworks outside the US. Responsible AI is an integral part of each set of guidance, and specific laws and regulations are still being developed.

Businesses need to abide by the laws of the countries in which they operate – meaning that some providers could be off-limits in the future.

Could data privacy concerns make a private cloud or Tier 2 provider a safer solution? And what about the supply chain behind AI services and the data centres they run on?

These are factors that CTOs need to consider and key reasons why responsible AI cannot be ignored.

Customers care about responsible AI

Tech leaders are increasingly concerned with protecting trust, and rightly so. People trust businesses more than governments or the media3, but if AI is allowed to undermine that trust, the damage to your reputation and revenue will be immediate.

AI issues that could damage customer trust include:

  • Biases leading to race and gender discrimination
  • Data leaks and privacy breaches
  • Sustainability concerns about energy use
  • Negative impact on employees, suppliers and society

Discover a Responsible AI Framework to guide your next steps

If you need help in organizing your approach, then look no further.

Intel has developed a framework of seven principles of responsible AI. They are mapped out in its Responsible AI report, along with more information about key approaches from legislators in the EU and UK.

There’s also information on Intel initiatives and products showing how responsible AI can be put into action. And it’s all free!

Sources

1 From Compliance to Confidence report, Accenture, 2024

2 The State of AI, McKinsey, 2024

3 Trust Barometer, Edelman, 2024

Miles Kendall
