
New Mathematical Formula Unveiled to Prevent AI From Making Unethical Decisions

Researchers from UK and Switzerland have developed a mathematical formula to help prevent Artificial Intelligence from making unethical decisions.

Researchers from the UK and Switzerland have found a mathematical means of helping regulators and businesses police Artificial Intelligence systems' tendency to make unethical, and potentially very costly and damaging, choices.

The collaborators from the University of Warwick, Imperial College London, and EPFL – Lausanne, along with the strategy firm Sciteb Ltd, believe that in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy—and to find and reduce that risk, or eliminate it entirely, if possible.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider, for example, using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also appear more profitable to adopt unethical pricing strategies that ultimately damage the company.

The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant financial penalty if regulators levy hefty fines or customers boycott the company – or both.

That's why these mathematicians and statisticians came together: to help businesses and regulators by creating a new "Unethical Optimization Principle", a simple formula to estimate the impact of AI decisions.

As it stands right now, "Optimization can be expected to choose disproportionately many unethical strategies," said Professor Robert MacKay of the Mathematics Institute of the University of Warwick.

"The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process."
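The intuition behind the principle can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's actual formula: it assumes a strategy space where a small fraction of strategies is unethical and, by assumption, riskier (heavier-tailed payoffs), and shows that naively picking the highest-return strategy selects unethical ones far more often than their share of the space would suggest.

```python
# Hypothetical illustration of the Unethical Optimization Principle's intuition
# (the payoff model below is an assumption for demonstration, not the paper's).
import random

random.seed(0)

N_STRATEGIES = 1000
UNETHICAL_FRACTION = 0.01   # assume 1% of the strategy space is unethical
TRIALS = 2000

unethical_wins = 0
for _ in range(TRIALS):
    best_return = float("-inf")
    best_is_unethical = False
    for _ in range(N_STRATEGIES):
        is_unethical = random.random() < UNETHICAL_FRACTION
        # Assumed payoff model: same mean return, but unethical strategies
        # have a heavier tail (std 2.0 vs 1.0), e.g. short-term gains from
        # exploiting customers before regulators or boycotts catch up.
        ret = random.gauss(0.0, 2.0 if is_unethical else 1.0)
        if ret > best_return:
            best_return = ret
            best_is_unethical = is_unethical
    unethical_wins += best_is_unethical

share = unethical_wins / TRIALS
print(f"Unethical strategies are {UNETHICAL_FRACTION:.0%} of the space "
      f"but win {share:.0%} of the time")
```

Under these assumed parameters, the unethical 1% of strategies wins the optimization far more than 1% of the time, which is the "disproportionate" selection MacKay describes: the optimizer is drawn to the extreme tail, and heavier-tailed (here, unethical) strategies dominate the tail.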

They have laid out the full details in a paper titled "An unethical optimization principle", published in Royal Society Open Science on Wednesday, 1 July 2020.

"Our suggested 'Unethical Optimization Principle' can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden," said MacKay. "(The) inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future."

Reprinted from the University of Warwick

