Advancing the future through responsible AI


Microsoft Africa CTO Ravi Bhat recognises the potential of AI while also advocating for its proper use.

As more businesses experience the benefits of AI and its use becomes more prevalent, it will become increasingly critical to leverage it in a responsible way, writes Ravi Bhat, chief technology officer and director of the commercial solutions area at Microsoft Africa.

Artificial intelligence (AI) is the defining technology of our time. It is increasingly embedded in all applications, platforms and software, and is used in workplaces, home offices, academic institutions, research labs and manufacturing facilities around the world to help everyone from scientists and salespeople to farmers, software developers and security practitioners.

Benefits like the automation of largely administrative and manual tasks free up time for workers to focus on higher-value work. Healthcare professionals, for example, can spend less time on administration and more time with their patients. AI can also detect unusual behaviour on banking accounts to reduce fraud in financial services, and enable predictive maintenance in manufacturing. These examples only scratch the surface of the advantages AI can offer across almost every industry.

As more businesses begin to experience these benefits and the use of AI becomes more widespread, it will become more critical to leverage it in a responsible way. Responsible AI has become a key theme in the enterprise AI market in recent years, as more companies grapple with challenges in governance, security and compliance.

Using AI responsibly

South Africa has an innovation mindset and is well placed to take advantage of AI tools. However, it is critical that business leaders do not delegate responsibility for AI and other technologies elsewhere in the business – they need to understand the risks and opportunities associated with these technologies, and look at the business through this lens and with a technological mindset.

We have developed a set of AI principles to help business leaders understand these risks and opportunities. The principles act as an ethical framework within which AI solutions are developed and deployed for the benefit of all. They comprise four core principles – fairness, reliability and safety, privacy and security, and inclusiveness – underpinned by two foundational principles: transparency and accountability.

Applying these principles – using the technology responsibly, with a technological mindset and an understanding of both the risks and opportunities of AI – has the potential not only to drive widespread business value, but also to create benefits for broader society.

Securing the future of AI

As consumption of products and services built around AI and machine learning (ML) increases, specialised actions must be undertaken to safeguard not only the business and its data, but also to protect its AI and algorithms from abuse, trolling and extraction.

Practising responsible AI by design does not eliminate all risks, but it does encourage organisations, leaders and developers to be clear about any limitations, account for intended uses and potential misuses, and think expansively about how to secure the benefits of a system and guard against its risks.

With the right guardrails, cutting-edge technology can be safely introduced to the world to help businesses accelerate their digital innovation to become more agile, resilient and competitive in an unpredictable economy.

Effective AI regulations

History teaches us that transformative technologies like AI require new rules of the road.

Proactive, self-regulatory efforts by responsible companies will help pave the way for these new laws, but we know that not all organisations will adopt responsible practices voluntarily.

Countries and communities will need to use democratic law-making processes to engage in whole-of-society conversations about where the lines should be drawn to ensure that people have protection under the law.

Effective AI regulations should centre on the highest risk applications and be outcomes-focused and durable in the face of rapidly advancing technologies and changing societal expectations. To spread the benefits of AI as broadly as possible, regulatory approaches around the globe will need to be interoperable and adaptive, just like AI itself.

As AI becomes a critical tool for organisations and individuals to stay productive, improve operational efficiencies and build resiliency to remain competitive, a commitment to listening, learning and improving is paramount.

Wide-ranging and deep conversations are needed as well as a commitment to joint action to define the guardrails for the future. By working together, we will gain a more complete understanding of the concerns that must be addressed and the solutions that are likely to be the most promising. Now is the time to partner on the rules of the road for AI.
