|AI Now 2019 Report recommends halting the use of facial recognition in sensitive social and political contexts to prevent AI bias / Credits: Tero Vesalainen via Shutterstock|
Since 2016, the AI Now Institute at New York University has released an annual report about artificial intelligence. The AI Now Report discusses the social impact that AI has on individuals, communities, and the population at large, drawing on information and analysis from experts in several industries worldwide. The team also works closely with partners in sectors such as IT, law, and civil rights.
According to The Next Web, a website and annual series of conferences focused on new technology and start-up companies in Europe, the growth of facial recognition and other risky AI technologies has barely slowed despite growing public concern and regulatory action. Several “smart city” projects have even put private companies in charge of “civic life,” managing critical resources and information. For instance, Google’s Sidewalk Labs promoted the creation of a Google-managed citizen credit score as part of its plan for public-private partnerships like Sidewalk Toronto.
“And Amazon heavily marketed its Ring, an AI-enabled home-surveillance video camera. The company partnered with over 700 police departments, using police as salespeople to convince residents to buy the system. In exchange, law enforcement was granted easier access to Ring surveillance footage,” the report added.
The report also highlighted existing AI bias. One of its recommendations is for governments and businesses to stop using facial recognition in sensitive social and political contexts, and for that halt to remain in place until the risks of the technology are fully studied and adequate regulations are implemented. The report also stated that the AI industry should make significant structural changes to address systemic racism, misogyny, and lack of diversity.
Additionally, to address AI bias, the report suggested that lawmakers regulate the integration of public and private surveillance infrastructures. Studies of AI bias should address not only technical issues but also the broader politics and consequences of AI’s use. The report also warned that patching systems or tweaking algorithms will not fix biased AI, discriminatory facial recognition systems, or AI-powered surveillance.