Where is the Unified Global Approach to Ethics in Robotics and AI: If We Can’t Get Along Globally, How Will Our Artificially Intelligent Offspring?



Maureen A. Majury, M.Ed.

September 30, 2016
 

Preface

Debate, thought, innovation, and the creation of standards are under way for determining the ethical behavior of robots and artificial intelligence (AI) in all their coming iterations and forms. Experts in computer science, economics, and other fields speculate about, and scan data on, workforce disruption, lost jobs, and robots becoming “More Human than Human” (White Zombie, 1995).

Ethics plays a large part in how robots and AI roll out today and how they will integrate into our societies and workplaces in the near future. How will robots make ethical decisions? Everyone is joining the discussion and opining without much mutual agreement. And every country, professional society, and association is quickly planting its stake in the ground and roundly declaring, “This is what robotic and AI ethics look like and should be.”

The British Standards Institution (BSI), which develops technical and quality guidelines for goods sold in the United Kingdom (UK), recently published a new guideline, BS 8611. The cost in US dollars for this report? $210. But who are the noted computer scientists, economists, and great minds behind Robot Ethics According to About 25 People in the UK? Unless you buy it, you don’t know.

Although robot and AI ethics are the document’s focus, judging from the free synopsis it does not delve into required criteria for detailed behaviors.

What’s ultimately lacking in this document, and in any other piece or recently formed group’s founding principles, is a global agreement on what constitutes ethics, and a shared belief system that would be an integrated part of any robot or AI decision-making process.

What might be missing in this discussion…

Ethics, core values, cultural beliefs, religious identification, political affiliation, and morals are all forged not just within a distinct country, but within regions of the country itself.

Each country has its own distinct views of what forms a socially accepted grouping of ethics and values, and thus of the decision-making process for the majority of contributing members of a distinct society (or country). Even if a majority accepts these values and ethics, there is most likely a minority that doesn’t agree, rightly, wrongly, or indifferently. Unifying ethics and values may also conflict between countries, between regions within a country, or even within a single state.

Unquestionably, robots will transition from unthinking, rote automatons into creative, analytical, and contributing members of a country, region, state, or county.

So, we come to a quandary. Who determines the values and ethics, the moral decisions, that will drive a functioning robot/AI as action-taker or decision-maker across the globe while it exists in a society? While the UK has the shadow decision-makers of the British Standards group, who will share only what constitutes decision-making for a robot from the age-old monotone of Asimov, “Do no harm to humans,” one must agree that much larger questions loom.

Globally, conflict is rooted in differing economic, religious, moral, and political beliefs, and in differing historical memories, which expand into views on ethnicity, gender, race, sex, and so on. What if every country programs robots and AI to match its majority values and opinions on things like the value of a male child over a female child, the value of a factory worker over that of a highly effective financial broker, or the value of a dog versus a cat? This presents a rather large conundrum over what constitutes the best value system of that country, let alone the globe, over who should program the robot and how, and over who should be looking over their shoulder.

Consider a situation where a homeless person is asking for money. Many might wonder how the money would be used. Some might examine the words on the homeless person’s cardboard sign. Others might look at the physical appearance of the destitute individual. These considerations must all be decided upon when programming a robot, because the question we will all want answered is: will the robot give the homeless person money?
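To make that concrete, here is a minimal sketch, assuming a crude weighted-feature model. Every feature name, weight, and threshold below is invented for illustration, not drawn from any real system; the point is that the “ethics” live entirely in numbers a programmer must choose, and two sets of weights can send the same robot to opposite conclusions about the same person.

```python
# Hypothetical sketch: reducing the "give to a homeless person?" decision
# to weighted features. All names, weights, and thresholds are assumptions;
# whoever picks them encodes a particular culture's values.

from dataclasses import dataclass

@dataclass
class Observation:
    stated_need: float   # plausibility of the sign's stated use of money (0-1)
    appearance: float    # inferred physical condition (0-1)
    local_norm: float    # how acceptable giving is in this region (0-1)

def should_give(obs: Observation, weights=(0.5, 0.2, 0.3), threshold=0.5) -> bool:
    """Return True if the weighted evidence crosses the giving threshold.

    The weights and threshold ARE the ethics: change them, and the same
    robot, facing the same person, reaches the opposite conclusion.
    """
    score = (weights[0] * obs.stated_need
             + weights[1] * obs.appearance
             + weights[2] * obs.local_norm)
    return score >= threshold

# The same observation, judged under two different value systems:
person = Observation(stated_need=0.8, appearance=0.5, local_norm=0.2)
print(should_give(person))                           # -> True under one country's weights
print(should_give(person, weights=(0.1, 0.1, 0.8)))  # -> False under another's
```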

Other examples confront those choosing the best value system for programming a robot. What constitutes an “ethical kill,” for example? Does that term even exist in the field of ethics, or was it created to justify military action? One doesn’t know.

Or what if one country considers theft a minor crime, but in another it is considered a capital offense?   Adultery may be grounds for counseling or divorce in one country, but what if in another it’s grounds for flogging?
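The same act could therefore trigger opposite responses from identically built robots, depending solely on where they were programmed. A minimal sketch of that dependence, assuming a per-jurisdiction lookup table (the country labels and sanctions below are invented placeholders, not claims about any real legal system):

```python
# Hypothetical sketch: one act, different programmed responses per
# jurisdiction. Labels and sanctions are invented placeholders.

SANCTIONS = {
    ("country_a", "theft"):    "minor_fine",
    ("country_b", "theft"):    "capital_offense",
    ("country_a", "adultery"): "counseling_or_divorce",
    ("country_b", "adultery"): "corporal_punishment",
}

def robot_response(country: str, offense: str) -> str:
    """Look up how a locally programmed robot classifies an act.

    A robot crossing a border would need to swap this table out --
    or carry every country's table at once and somehow choose.
    """
    return SANCTIONS.get((country, offense), "unknown")

print(robot_response("country_a", "theft"))  # -> minor_fine
print(robot_response("country_b", "theft"))  # -> capital_offense
```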

So, in the end, is each country going to create more indestructible, resolute, rigid robotic/AI versions of its own majority? If we can’t agree on what constitutes a shared set of ethics and values globally, what does this foreshadow for commonality, as the “Ethics Policy Monitors” (both non-profits and big tech companies) rapidly assemble in each country to determine what counts as acceptable behavior for robots and AI entities?

…The sadness of watching the future unfold is the remembering.

