How to Build A More Ethical AI For the Betterment of Humanity
Sat, April 10, 2021


AI can analyze large amounts of data in the blink of an eye and turn it into actionable insights; humans would need far longer to process the same data / Photo by: Tatiana Shepeleva via 123RF

 

R.J. Taylor, founder and CEO of Pattern89, a marketing R&D platform for social media ads, said that an app called ImageNet Roulette showed how AI classifies people, as reported by the editorial team of Inside Big Data, a platform dedicated to publishing content related to machine learning and big data. The app was featured in an art exhibit on the history of image recognition systems, and it exposed the bias and flaws in letting a machine categorize people’s faces: its labels were racist.

 AI can analyze large amounts of data in the blink of an eye and turn it into actionable insights; humans would need far longer to process the same data. Despite AI’s benefits to various industries, we must remember to build a more ethical AI, one that is, as much as possible, free of human bias.

 

Statistics on AI

Global market research firm Allied Market Research projects that the AI market will grow at a CAGR of 55.6% from 2018 to 2025, reaching $169,411.8 million in 2025, up from $4,065.0 million in 2016, as cited by Jenny Chang of Finances Online, a B2B and SaaS online review site. The Asia Pacific region will hold the largest AI market share, and manufacturing is expected to see the fastest growth in AI use.
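As a quick sanity check on those figures, here is a minimal sketch of the CAGR arithmetic in Python. The report’s 2018 base value is not cited in the article, so the implied 2018 figure computed below is a back-of-the-envelope derivation, not a number from the report.

```python
# Sanity-check the cited Allied Market Research figures.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Cited figures, in millions of USD.
value_2016 = 4_065.0
value_2025 = 169_411.8

# CAGR over the full 2016-2025 span (9 years).
print(f"2016-2025 CAGR: {cagr(value_2016, value_2025, 9):.1%}")

# Back out the 2018 market size implied by the cited 55.6% CAGR for 2018-2025
# (an assumption-driven derivation, since the article gives no 2018 value).
implied_2018 = value_2025 / (1 + 0.556) ** 7
print(f"Implied 2018 market size: ${implied_2018:,.1f}M")
```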

 In a survey by MemSQL, a real-time data warehouse provider, 61% of more than 1,600 respondents said that machine learning (ML) and AI were their organizations’ “most significant data initiative” for 2019, wrote GlobeNewsWire, a press release distribution service. Big data and business analytics initiatives followed at 58%.

Global market research firm Allied Market Research projects that the AI market will grow at a CAGR of 55.6% from 2018 to 2025, reaching $169,411.8 million in 2025, up from $4,065.0 million in 2016 / Photo by: wklzzz via 123RF

 

The survey also found that 88% of respondents said their company already employs, or plans to employ, AI and ML technologies. Additionally, 95% of those planning to implement AI/ML said it would complement their work or make their jobs easier rather than reduce or complicate their roles, while 74% of all respondents saw ML and AI as game-changers with the potential to revolutionize jobs and industries.

 On the other hand, audit and assurance, consulting, and tax services provider PwC stated that 76% of CEOs were concerned about AI adoption’s lack of transparency and potential for bias. Some 77% said that AI and automation could amplify vulnerability and disruption to their businesses.

 These statistics show that integrating AI technologies can improve a business’s standing in its industry. AI solutions can give enterprises a competitive advantage by streamlining core corporate functions.

A Guide on Building a More Ethical AI

1. Establish a Clear Definition of “AI Ethics”

Barbara Cosgrove, chief privacy officer at Workday, an on-demand financial management and human capital management software vendor, wrote an article for the World Economic Forum, a non-profit organization, in which she said that the definition of “AI ethics” must be specific and actionable for all stakeholders in the enterprise. For example, Cosgrove’s company defines it as putting people first, caring about society, acting fairly, and respecting the law.

 Integrating “ethics” into AI systems can feel abstract to engineers and developers. However, discussing the ethical use of AI raises the bar for the whole industry and allows organizations to break out of their silos and share best practices.

 2. Create Guidelines

 Companies can use established guidelines as a starting point, incorporating them into their own ethical principles to reflect their team and core values. This means working closely with a team that represents the country’s population in terms of age, gender, and ethnicity.

For instance, a representative team in the US would be roughly 50% female and 27% people of color. Committing to diversity and ethics enables the humans behind the AI to ask the right questions and formulate the best possible solutions.

 Companies can use established guidelines as a starting point, incorporating them into their own ethical principles to reflect their team and core values. This means working closely with a team that represents the country’s population in terms of age, gender, and ethnicity / Photo by: rawpixel via 123RF

 

 3. Collaborate and Empower Employees

 A cross-functional group of experts can guide all decisions regarding the design, development, and deployment of responsible ML and AI. Like Workday, firms can train staff on AI ethics through modules, toolkits, seminars, employee onboarding, and workshops. By forming a diverse group of experts with unique skill sets and views, firms can weigh the current and future uses of ML and AI in their products and workflows.

 Organizations can also empower employees by holding open conversations about AI: its implications, how it functions, and how it complements their jobs.

4. Achieve Transparency and Explainability

 Maribel Lopez of business news website Forbes explained that transparency is a key requirement for building trust in and driving adoption of AI-powered solutions. People already distrust AI because many organizations prioritize complexity over trust (and explainability) when developing AI models. If a company is not transparent about its AI, that’s one big red flag. Is it using unethical algorithms? Are its AI systems not true AI but machine learning with human inputs?

 Further, regulatory compliance and model security requirements oblige organizations to “design a certain level of interpretability” into their AI models. Companies also need to ensure that their systems are not learning and reinforcing unconscious biases. This may prompt enterprises to augment existing data with a more representative sample and to account for changes in laws, norms, and language.
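One simple way to move toward a more representative sample is to reweight under-represented groups in the training data. Below is a minimal sketch, assuming a pandas DataFrame with a hypothetical `group` column and made-up target shares (neither comes from the article).

```python
import pandas as pd

# Toy stand-in data; `group` is a hypothetical demographic column.
df = pd.DataFrame({"group": ["a", "a", "a", "b"]})

# Hypothetical target shares the data should reflect (e.g., census figures).
target_share = {"a": 0.5, "b": 0.5}

# Shares actually observed in the data.
observed_share = df["group"].value_counts(normalize=True)

# Weight each row so that, in aggregate, groups match the target shares.
weights = df["group"].map(lambda g: target_share[g] / observed_share[g])
print(weights)
```

Here the under-represented group receives a weight above 1 and the over-represented group a weight below 1, so a model trained with these sample weights sees both groups in the target proportions.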

 While it’s true that complex AI algorithms can draw insights from data that were previously unattainable, business users (and even data scientists) may not understand the logic behind a system’s decisions. One way to make a model more transparent is to adopt it from an inherently explainable family of models, such as linear models, rule sets, decision sets, decision trees, generalized additive models, and case-based reasoning methods.
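As an illustration of one family from that list, here is a minimal sketch of a shallow decision tree whose decision rules can be printed and audited end to end. It uses scikit-learn and its bundled iris dataset purely for illustration; none of this comes from the article.

```python
# A shallow decision tree is inherently explainable: every prediction
# can be traced through a small set of human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# Limiting depth keeps the tree small enough for a human to read in full.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders every decision rule the model learned.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Capping the depth trades some accuracy for transparency, which is exactly the complexity-versus-explainability trade-off the paragraph above describes.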

The Year of AI Governance

 As more industries adopt sophisticated AI models built on deep learning, enterprises need tools that surface issues; after all, companies are responsible for implementing AI ethically. We can also expect more organizations to adopt AI governance tools to mitigate potential regulatory risks.

Overall, advances in technology let us reap the benefits of AI and other solutions, but ethics should be prioritized over complexity or profit. Companies should collaborate with experts to minimize bias, and they are obligated to share how their AI-enabled technologies work and how AI complements their employees’ jobs. AI is indeed a powerful technology, but it serves humanity only if we build and use it responsibly and ethically.