WHEN IS AI A TOOL, AND WHEN IS IT A WEAPON?

04 December 2019

The immense capabilities artificial intelligence is bringing to the world would have been inconceivable to past generations. But even as we marvel at the incredible power these new technologies afford, we’re faced with complex and urgent questions about the balance of benefit and harm.

When most people ponder whether AI is good or evil, what they’re essentially trying to grasp is whether AI is a tool or a weapon. Of course, it’s both — it can help reduce human toil, and it can also be used to create autonomous weapons. Either way, the ensuing debates touch on numerous unresolved questions and are critical to paving the way forward.

Hammers and guns

When contemplating AI’s dual capacities, hammers and guns are a useful analogy. Regardless of their intended purposes, both hammers and guns can be used as tools or weapons. Their design certainly has much to do with these distinctions — you can’t nail together the frame of a house with the butt of a gun, after all — but in large part what matters is who’s wielding the object, what they plan to do with it, to whom or for whom, and why.

In AI, the gun-and-hammer metaphor applies neatly to two categories: autonomous military weapons and robotic process automation (RPA).

Weapons themselves are nothing new, but humans have always been the ones releasing the arrow, pulling the trigger, or pressing the button. The question now is whether to give a killing machine decision-making power over who lives and who dies. That's a new line to cross, and it underscores the need for human-in-the-loop AI design.

For companies, and even for individual workers and teams, RPA can be empowering. But the downside is that automation often eliminates existing jobs. Depending on the context, one might argue that a company could weaponize automation to gut its workforce, leaving throngs of workers adrift with their hard-won skills and experience suddenly obsolete.

The issue of automation comes up frequently in these debates. Many of the new jobs that emerge after automation will require learning what amounts to a trade rather than earning a degree in computer science. Sometimes those jobs may be in the same field; autonomous trucks could displace truck drivers, for example, but someone will still need to be on board handling logistics and communications, a job a former trucker could move into with a modest amount of new training. A broad reskilling and upskilling effort can help displaced workers move into better jobs than they had before.

Back and forth goes the power differential. Automation is a tool, is a weapon, is a tool.

In the murky middle

Between the extremes of worker-aiding automation and killer drones lies almost all other AI technology, and that middle is murky, to say the least. That's where the debate about AI becomes most difficult, but also where it comes into greater focus.

More than any other AI technology, facial recognition has shown clearly how an AI tool can be perverted into a weapon.

It is true that there are many beneficial uses of facial recognition technology. It can be used to diagnose genetic disorders and to help screen for potential human trafficking. Law enforcement can use it to quickly track down and apprehend a terrorist. There are perfectly neutral uses, too, like enhancing an online shopping experience.

The use of facial recognition in policing and sentencing, and also in fields like hiring, is deeply problematic. We know that facial recognition technology is often less accurate when applied to women and people of color, for instance, owing to models trained on unrepresentative data sets. Data may also contain biases that are only magnified when a model learns from them at scale.
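To make that concern concrete, here is a minimal, hypothetical sketch (not drawn from the article) of how such an accuracy gap can be surfaced. The group labels, predictions, and ground-truth values below are made-up placeholders standing in for a real evaluation set of a face-matching model.

```python
# Illustrative only: comparing match accuracy across demographic groups
# in a small, made-up evaluation set. Real audits use far larger data.
from collections import defaultdict

# Each record: (group label, model prediction, ground truth)
evaluation = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in evaluation:
    total[group] += 1
    correct[group] += int(predicted == actual)

# A large accuracy gap between groups is the kind of disparity that
# audits of facial recognition systems have reported.
for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy = {accuracy:.2f} (n = {total[group]})")
```

Audits of commercial systems follow this same basic logic, just with much larger and more carefully curated evaluation sets; the point is simply that the bias in question is measurable, not hypothetical.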

And there are reasonable moral objections to the very existence of facial recognition software, which has led multiple U.S. cities to ban the technology. Democratic presidential candidate Senator Bernie Sanders (I-VT) has called for a ban on police use of facial recognition software.

All of the above calls into question the responsibility tech companies bear for selling facial recognition technology to governments.

Is the journey the destination?

Because AI technologies can feel so huge, powerful, and untamable, the challenges they introduce can feel intractable. But they aren’t. A pessimistic view is that we’re doomed to be locked in an arms race with inevitably severe geopolitical ramifications. But we aren’t.

Structurally speaking, humanity has always faced these kinds of challenges, and the playbook is the same as it ever was. Bad actors have always found, and will always find, ways to weaponize tools. It's incumbent on everyone to push back and rebalance. For today's AI challenges, perhaps the best place to start is with Oren Etzioni's Hippocratic Oath for AI practitioners, an industry take on the medical profession's "do no harm" commitment. The Oath includes a pledge to put human concerns over technological ones, to avoid playing god (especially with AI capabilities that can kill), to respect data privacy, and to prevent harm whenever possible.

Cloud computing offers a strong example of a seemingly intractable new problem that was eventually worked through. Companies like Microsoft, alongside the governments of multiple nations, had to rethink how cloud computing affected international borders, law enforcement jurisdictions, and the property ownership and privacy rights of private citizens, a reckoning that in the United States culminated in the CLOUD Act.

Though the CLOUD Act saga was particularly complex and protracted, the fundamental challenge of addressing new problems created by technological advances comes up repeatedly throughout Tools and Weapons, Microsoft president Brad Smith's book, around cybersecurity, the internet, social media, surveillance, opportunity gaps caused (and potentially solved) by technologies like broadband, and AI. In Smith's retelling, the process of finding solutions was always similar. It required all stakeholders to act in good faith. They had to listen to concerns and objections, and they had to work together, often with rival companies and, in many cases, with multiple national governments, to craft new laws and regulations. Sometimes the solutions required further technological innovations.

Dealing with AI and its promises and problems requires moral outrage, political will, responsible design, careful regulation, and a means of holding the powerful accountable. Those in power, which in AI primarily means the biggest tech companies in the world, need to act in good faith, be willing to listen, understand how and when tools can feel like weapons to people, and have the moral fortitude to do the right thing.

 

The full article can be found on VentureBeat

 

