SAS’s Olivier Penel explains that without trust, there is no adoption and consequently no value from AI.
Research suggests that 56 percent of executives globally have slowed AI adoption over concerns that the technology could damage their brand and erode stakeholder and customer trust.
These concerns have given rise to the concept of Responsible AI (RAI), which refers to how organisations use AI technologies while adhering to principles that serve the greater good, protect individuals and their rights, and ensure the trustworthiness of AI applications.
Olivier Penel, SAS data and analytics strategic advisor, says, “It is all well and good to say that the use of AI must be fair and impartial, but how does that translate into actionable guidelines for teams to implement? Safeguards, standards, and best practices should be carefully defined so all involved know what is expected of them.”
Even though AI makes virtually anything possible, that does not mean there should be no boundaries. The rapidly evolving regulatory environment is providing businesses with legal parameters for what they can do with AI.
Companies must therefore act responsibly, establishing guidelines and principles for what they can and cannot do with their data. This is also where bias comes into play.
Understanding bias
“Bias talks to the impartiality of the decisions being made and is something that must be considered across the lifecycle of the data. Companies must therefore mitigate the risk of bias taking place. They must be proactive in selecting training data sets that are representative of the population that the AI system will be used for,” says Olivier.
For instance, when building a recruitment tool, is the aim to find the best possible job candidates, or to find people like the ones the business already has in place? This means that the problem and business goals must be defined, and any sensitive variables and proxies removed.
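To make this concrete, a data science team might screen a training set along these lines before modelling. The sketch below is illustrative only: the column names, the list of sensitive attributes, and the correlation threshold are all hypothetical, and a simple correlation is a first-pass proxy check rather than a complete audit.

```python
import pandas as pd

SENSITIVE = ["gender", "age", "ethnicity"]  # assumed sensitive attributes

def drop_sensitive_and_flag_proxies(df: pd.DataFrame, threshold: float = 0.5):
    """Remove sensitive columns and flag numeric features that correlate
    strongly with a sensitive attribute (possible proxies, e.g. postcode)."""
    proxies = []
    numeric = df.select_dtypes("number")
    for attr in SENSITIVE:
        if attr not in df.columns:
            continue
        encoded = pd.Series(pd.factorize(df[attr])[0])  # label-encode the attribute
        for feature in numeric.columns:
            if feature in SENSITIVE:
                continue
            corr = encoded.corr(numeric[feature].reset_index(drop=True))
            if pd.notna(corr) and abs(corr) >= threshold:
                proxies.append((feature, attr, round(float(corr), 2)))
    cleaned = df.drop(columns=[c for c in SENSITIVE if c in df.columns])
    return cleaned, proxies

# Hypothetical recruitment data: the income index of a candidate's postcode
# turns out to track gender closely, so it is flagged for human review.
candidates = pd.DataFrame({
    "gender": ["F", "M", "F", "M"],
    "years_experience": [5, 7, 7, 5],
    "postcode_income_index": [0.9, 0.2, 0.8, 0.1],
})
cleaned, proxies = drop_sensitive_and_flag_proxies(candidates)
print(proxies)  # [('postcode_income_index', 'gender', -0.99)] -> review before training
```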
Furthermore, companies can check whether the model behaves consistently across different groups of people. Throughout the process, the organisation can monitor the impact on people and address bias.
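One common way to perform such a check, sketched below with made-up data, is to compare the model’s selection rate for each group; the gap between the highest and lowest rates (the demographic parity difference) gives a rough signal of inconsistent treatment.

```python
import pandas as pd

def selection_rates(predictions: pd.Series, groups: pd.Series) -> pd.Series:
    """Share of positive (selected) outcomes per group."""
    return predictions.groupby(groups).mean()

def demographic_parity_difference(predictions: pd.Series, groups: pd.Series) -> float:
    """Gap between the most- and least-favoured groups' selection rates."""
    rates = selection_rates(predictions, groups)
    return float(rates.max() - rates.min())

# Illustrative decisions: 1 = candidate shortlisted, grouped by a protected attribute.
preds = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
grps = pd.Series(["A", "A", "A", "A", "B", "B", "B", "B"])
print(selection_rates(preds, grps))                # A: 0.75, B: 0.25
print(demographic_parity_difference(preds, grps))  # 0.5 -> large gap, investigate
```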
Being human
Olivier adds, “When it comes to RAI, the important thing is to put the human back into the equation. There is a difference between automated decision-making and aiding the decision-making process. Companies must therefore structure the use, deployment, and implementation of AI technology with a people-centric approach in mind.”
It is critical to avoid treating RAI as an afterthought and to embed RAI principles from the outset. Measures need to be put in place to monitor bias throughout the process, with specific tests to continually evaluate how everything is being analysed.
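In practice, such tests can be wired into a pipeline so that a breach is caught automatically rather than discovered after the fact. The sketch below assumes a parity-gap tolerance agreed by a governance team; the threshold, data, and group labels are hypothetical.

```python
import pandas as pd

MAX_PARITY_GAP = 0.10  # illustrative tolerance, set by governance policy

def assert_parity(decisions: pd.Series, groups: pd.Series) -> None:
    """Fail loudly if the gap in positive-decision rates between any two
    groups exceeds the agreed tolerance."""
    rates = decisions.groupby(groups).mean()
    gap = float(rates.max() - rates.min())
    if gap > MAX_PARITY_GAP:
        raise AssertionError(
            f"Parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}: {rates.to_dict()}"
        )

# Run on every new batch of decisions, e.g. as a scheduled job or CI test.
batch = pd.DataFrame({"decision": [1, 1, 0, 1, 0, 0],
                      "group": ["A", "A", "A", "B", "B", "B"]})
assert_parity(batch["decision"], batch["group"])  # raises: gap 0.33 > 0.10
```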
“Even so, one of the most significant risks is to only think of what could go wrong with AI and not consider all the benefits the technology can deliver. People should not be prejudiced against the technology,” says Olivier.
It is clear that human-centricity is a key component, although areas such as personalising website navigation and product prompts can function well without human intervention.
“RAI is about building trust with employees, partners, customers, and stakeholders. Without trust, there is no adoption, and without adoption, there is no value delivered. AI can bring tremendous value to people, to the environment, and to society at large, but it cannot go unchecked. Ultimately, AI should serve our needs, and humans should be part of the equation,” concludes Olivier.