|AI will keep an eye on texting drivers / Photo Credit: Valery Brozhinsky (via Shutterstock)|
The Australian state of New South Wales announced that it will crack down on drivers using their phones by integrating machine vision into roadside cameras, reported James Vincent of The Verge, an American technology news site. The AI flags suspects, then humans “confirm what’s going on.” Afterwards, a warning letter is sent to the offender. Michael Corboy, the assistant police commissioner of New South Wales, said, “It’s a system to change the culture.” Officials hope the technology will help reduce road fatalities by a third over two years.
This technology also illustrates the “slow creep” of AI into state and corporate surveillance. According to experts, this trend could chill civil rights, automate biases and prejudices, and push society toward authoritarianism. The Australian state’s roadside cameras show that identifying people is only the beginning of AI surveillance; the real power, and the real threat, lies in identifying actions. Such features are not yet widespread, but they continue to seep into people’s lives.
In Japan, for example, you can purchase an AI surveillance camera that automatically spots shoplifters. In the U.S., one company is developing a “Google for CCTV” that lets users search surveillance footage for “specific types of clothes or cars.” AI experts treat such applications with suspicion: if AI systems are trained on data that disadvantages certain groups or ethnicities, those biases will surface in the software’s results.
But such problems have not stopped private firms and governments from adopting AI. Biased algorithmic systems are already in use in healthcare and criminal justice, and surveillance may prove no different. The result could be a “more tightly-controlled and repressive society.”
Last year, Jay Stanley, a senior policy analyst at the ACLU, told The Verge, “We want people to not just be free, but to feel free.” That means people should not have to worry about how an unseen, unknown entity might interpret, or misinterpret, their every move. The concern is that people will begin to police their own behavior, fearing that anything they do could be misread and bring unwarranted consequences.