Never Fear AI: How to Successfully Regulate the Technology
Tue, April 20, 2021


Photo by: Kamil Macniak via 123RF


Elon Musk warned that AI is “summoning the demon,” as quoted by Tom Wheeler of Brookings, a non-profit policy organization in Washington, DC. Musk’s ominous statement continues a long tradition of fearful warnings about new technologies, one that can be traced all the way back to ancient times. In the 16th century, the Vicar of Croydon warned that Gutenberg’s so-called demonic press would destroy the faith. A few hundred years later, an Ohio school board declared that the new steam railroad was Satan’s device to draw immortal souls to hell.

We can laugh at these stories now, but history is repeating itself: today, it is AI that we fear. There are recurring fears that AI will invade people’s privacy, impoverish people, lead to extreme surveillance, and ultimately control humanity. Setting those fears aside, let’s talk about how AI regulation might help minimize them.


An Interesting Survey on AI Regulation

A 2019 study conducted by the Center for the Governance of AI, housed at Oxford University’s Future of Humanity Institute, found that Americans have mixed support for the continued development of AI, as reported by Karen Hao of MIT Technology Review, an authoritative source of technology news. It also found that an overwhelming majority of Americans agree that AI should be regulated. More Americans support than oppose AI development, but there is no strong consensus either way.

In fact, 28% of respondents said they somewhat support AI development, and the same share said they neither support nor oppose it. Another 13% strongly support development, an equal 13% somewhat oppose it, 9% strongly oppose it, and 10% admitted they don’t know.

When asked whether they believe high-level machine intelligence will be more harmful than good, 21% of respondents said they were “more or less neutral” and another 21% answered “on balance good.” Alternatively, 22% of Americans said “on balance bad.” Only 5% said high-level machine intelligence will be “extremely good,” while 12% called it “extremely bad, possibly human extinction.” The remaining 18% of respondents answered, “I don’t know.”

Americans also want better governance: more than eight in 10 respondents believed that AI and robotics “should be managed carefully,” with 52% answering “totally agree” and 30% “tend to agree.” On the other hand, 5% of respondents answered “tend to disagree,” while only 1% totally disagreed with the careful management of AI and robotics. Interestingly, 12% of Americans admitted they don’t know whether these technologies should be carefully managed or not.
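
To make the survey arithmetic above concrete, here is a minimal Python sketch tallying the percentages as cited in this article. The category labels and figures are reproduced from the summary above; totals can drift a point from 100% because the published report rounds each figure.

```python
# Tally the AI-governance survey figures as cited in this article.
# Percentages come from the summary above; small deviations from
# 100% reflect rounding in the published report.

support = {
    "strongly support": 13,
    "somewhat support": 28,
    "neither support nor oppose": 28,
    "somewhat oppose": 13,
    "strongly oppose": 9,
    "don't know": 10,
}

manage_carefully = {
    "totally agree": 52,
    "tend to agree": 30,
    "tend to disagree": 5,
    "totally disagree": 1,
    "don't know": 12,
}

# "More than eight in 10" = totally agree + tend to agree.
agree = manage_carefully["totally agree"] + manage_carefully["tend to agree"]
print(f"Agree AI should be managed carefully: {agree}%")  # 82%

# Net support for AI development: supporters minus opponents.
net = (support["strongly support"] + support["somewhat support"]
       - support["somewhat oppose"] - support["strongly oppose"])
print(f"Net support for AI development: {net:+d}%")        # +19%
print(f"Support categories sum to: {sum(support.values())}%")  # 101% (rounding)
```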

What’s even more fascinating is that no federal or international agency, company, non-profit, or university earned the trust of more than 50% of respondents to develop and responsibly manage AI. The US military and university researchers received the most trust, with 14% and 17% of respondents, respectively, expressing a “great deal of confidence” in these entities when it comes to building AI.

Allan Dafoe, director of the center and co-author of the report, noted that while AI is beneficial, there has to be a “broad legitimate consensus around what society is going to undertake.”

Get Rid of the Fear

Contrary to fears of the apocalypse, research to create human-like AI is not actually progressing. Andrew Moore, former dean of computer science at Carnegie Mellon University, said, “We’ve pretty much stopped trying to mirror human thinking out of the box.” It is important not to substitute fear for solution-oriented thinking.

Solution-oriented thinking involves asking questions, identifying issues, and finding solutions. If machines do not pose an existential threat to humanity, then how do we move beyond hysteria and focus on the practical aspects of machine intelligence?

It can be intimidating, but machine learning has already come to describe the whole field of computer science. Frankly, it is difficult to find any activity within computer science that sits “outside of machine learning’s manipulation of data.”

Since machine learning and AI are part of our current reality, we must clearly delineate the issues that need to be solved. We must also focus on dealing with the effects of new technologies rather than trying to regulate the amalgam of technologies we refer to as AI.

Photo by: boscorelli via 123RF


Focus on the Effects

Effective regulation focuses on the effects of a new technology rather than on the technology itself. Looking back at history, we did not regulate railroad tracks and switches; we regulated the effects of their usage. We did not regulate telegraph and telephone wires, but we did determine whether access to those services was just and reasonable. With AI, we should likewise focus on its tangible effects.

“There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition,” noted Google CEO Sundar Pichai, as quoted by Liam Tung of ZDNet, a business technology website. Of course, the industry is already working on these issues, but there will be more challenges that no company or industry can solve alone.

In reality, though, that’s easier said than done. AI’s reach is vast because it can quickly access and process the volumes of data that help drive our cars, cook food in our kitchens, and more. AI is also advancing rapidly, with its expansion often described as “exponential,” which poses yet another challenge to regulating it.

Nevertheless, regulators should not only tackle AI’s effects but also determine acceptable behavior and identify the people or entities responsible for overseeing those effects.

Regulate with Industry Expert Advice

Governments and policymakers need to collaborate with experts from various industries, wrote Asheesh Mehra, co-founder and CEO of Antworks, in Information Age, a platform for CTOs and technology leaders. These experts can advise decision-makers on best practices for policy and regulation: what the technology is for, how to make it work, how it may affect the workforce, and the like, to guarantee a seamless transition to AI-enabled business.

Companies, in turn, should be optimistic that decision-makers will listen to their concerns and regulate applications effectively rather than simply limit their usage.

Photo by: Aleksandr Davydov via 123RF


Adapt Existing Legislation

Pichai recommended that governments adapt existing legislation, such as the EU’s General Data Protection Regulation (GDPR), rather than create new laws from scratch. Good regulatory frameworks should take into account safety, explainability, fairness, and accountability, helping us develop the right tools in the right way. Regulation should also take a “proportionate approach,” weighing potential harms against social opportunities and benefits, Pichai asserted.


AI Will Continue to Take Over

Mehra explained that AI (and automation) will take over the business world and other industries only if it is smartly regulated. AI will have a profound impact, but we should also plan for a future in which it is used ethically.

Technological advances are inevitable, and we should not see new technologies as harbingers of the apocalypse. Regulation starts now, but it should focus on AI’s effects, balancing them against the technology’s benefits to humankind.