According to experts: AI needs regulation and public accountability

Last month, the AI Now Institute at New York University, which researches the social impact of AI, urged public agencies responsible for criminal justice, healthcare, welfare, and education to ban black-box AIs because their decisions cannot be explained.

Technological advancements are creating a sea change in today’s regulatory environment, posing significant challenges for regulators who strive to balance fostering innovation, protecting consumers, and addressing the potential unintended consequences of disruption.

Technologies such as Artificial Intelligence (AI), Machine Learning (ML), Distributed Ledger Technology, Big Data analytics, and the Internet of Things (IoT) are creating new ways for consumers to interact with technology and with one another. It is an era in which machines teach themselves to learn, autonomous vehicles communicate with one another, and smart devices respond to and anticipate consumer needs.

In recent years, however, researchers have documented a long list of AIs that make bad decisions, either because of coding mistakes or because of biases ingrained in the data they were trained on.

Bad AIs have flagged the innocent as terrorists, sent sick patients home from the hospital, cost people their jobs and driving licenses, had people kicked off the electoral register, and pursued the wrong men for child-support bills. They have discriminated on the basis of names, addresses, gender, and skin color.
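To see what auditing for this kind of discrimination can look like in practice, here is a minimal sketch of one common fairness check, the disparate-impact (four-fifths) ratio. The decision data below is hypothetical, and real audits involve legally defined protected attributes and far more careful statistics.

```python
# Minimal sketch of a disparate-impact audit on hypothetical decisions.
# The "four-fifths rule" used in US employment law flags a selection
# ratio below 0.8 between a protected group and the majority group.

def favorable_rate(decisions):
    """Fraction of decisions that were favorable (True)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs: True means a favorable decision,
# e.g. a loan approved or a CV passed on to a recruiter.
majority_group = [True, True, False, True, True, True, False, True]
protected_group = [False, True, False, False, True, False, False, True]

ratio = favorable_rate(protected_group) / favorable_rate(majority_group)
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.50 here; below 0.8 is a red flag
```

A check like this only surfaces a disparity; deciding whether the disparity is unlawful or unjust is exactly the kind of judgment the report argues must stay with accountable humans and institutions.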

Most AIs are made by private companies that do not let outsiders see how they work. Moreover, many AIs employ such complex neural networks that even their designers cannot explain how they arrive at answers. The decisions are delivered from a “black box” and must essentially be taken on trust.
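One research direction for probing such black boxes is to treat the model purely as an input-output function and measure which inputs its decisions actually depend on. Below is a minimal sketch of permutation importance in that spirit; the black_box_predict function and the data are hypothetical stand-ins, not any particular production system.

```python
# Sketch: probing a black-box model with permutation importance.
# Idea: shuffle one input feature at a time and measure how much the
# model's agreement with its own baseline predictions drops; large
# drops reveal which features the opaque model actually relies on.

import random

def black_box_predict(row):
    """Hypothetical stand-in for an opaque model we cannot inspect."""
    return 1 if 0.7 * row[0] + 0.1 * row[1] > 0.5 else 0

def agreement(rows, reference):
    """Fraction of rows where the model still matches the reference labels."""
    return sum(black_box_predict(r) == y for r, y in zip(rows, reference)) / len(rows)

random.seed(0)
rows = [[random.random(), random.random()] for _ in range(500)]
baseline_labels = [black_box_predict(r) for r in rows]

for feature in range(2):
    shuffled = [r[:] for r in rows]           # copy every row
    column = [r[feature] for r in shuffled]
    random.shuffle(column)                    # break this feature's link to the output
    for r, value in zip(shuffled, column):
        r[feature] = value
    drop = 1.0 - agreement(shuffled, baseline_labels)
    print(f"feature {feature}: importance ~ {drop:.2f}")
```

Techniques like this explain which inputs matter, not why, which is why researchers argue they complement, rather than replace, the documentation and oversight the report calls for.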

Because of that, accountability and transparency are now among the most crucial areas of AI research. The stakes are higher still when the AI is driving a car, diagnosing illness, or holding sway over a person’s job or prison sentence.

According to a new report from AI Now, whose authors include researchers affiliated with Google and Microsoft, artificial intelligence systems and their creators need direct intervention by governments and human-rights watchdogs to control the disruptive effects of AI systems.

The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies. It is the first university research center focused specifically on AI’s social significance. Founded and led by Kate Crawford and Meredith Whittaker, AI Now is one of the few women-led AI institutes in the world.

In December 2018, in a 40-page report (PDF), the New York University-based organization, whose membership includes researchers from Microsoft Research and Google, stated that AI-based tools and systems are being put to work in places where they can deeply affect thousands or millions of people. The researchers write in the paper, “As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.”

"Regulation is desperately needed. But a “national AI safety body” or something like that is impractical. Instead, AI experts within industries like health or transportation should be looking at modernizing domain-specific rules to include provisions limiting and defining the role of machine learning tools. We don’t need a Department of AI, but the FAA should be ready to assess the legality of, say, a machine learning-assisted air traffic control system."

"Public accountability and documentation need to be the rule, including a system’s internal operations, from data sets to decision-making processes."

“Facial recognition, in particular, questionable applications of it like emotion and criminality detection, need to be closely examined and subjected to the kind of restrictions as are false advertising and fraudulent medicine.”

These safeguards are necessary not just for basic auditing and for justifying the use of a given system, but also for legal purposes, should a decision be challenged by a person the system has classified or affected. Companies need to swallow their pride and document these things even if they would rather keep them as trade secrets, which seems to be the biggest ask in the report.
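To make the documentation recommendation concrete, one lightweight pattern teams have adopted, in the spirit of published proposals such as model cards and datasheets for datasets, is a structured, machine-readable record released alongside the system. The sketch below is illustrative only; the ModelCard fields and the loan-risk example are hypothetical, not a format prescribed by the AI Now report.

```python
# Sketch of a machine-readable "model card": a structured record of a
# system's training data, intended use, and known limitations, published
# alongside the model. All field names and values here are illustrative.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str                         # provenance of the data set
    excluded_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    subgroup_metrics: dict = field(default_factory=dict)

card = ModelCard(
    name="loan-risk-v2",                       # hypothetical system
    intended_use="Ranking loan applications for human review",
    training_data="Internal applications, 2015-2018, US only",
    excluded_uses=["Fully automated denials with no human review"],
    known_limitations=["Applicants under 25 are under-represented"],
    subgroup_metrics={"accuracy_overall": 0.91, "accuracy_under_25": 0.78},
)

print(json.dumps(asdict(card), indent=2))     # publishable audit artifact
```

Keeping the record in a machine-readable form means it can be versioned alongside the model and checked automatically at release time, rather than living in a document no one updates.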

Google, for instance, recently made a point of setting out “AI principles” after the uproar over its work for the Defense Department. It said its AI tools would be socially beneficial and accountable, and would not contravene widely accepted principles of human rights.

Luciano Floridi at the Oxford Internet Institute said, “We need transparency as far as it is achievable, but above all, we need to have a mechanism to redress whatever goes wrong, some kind of ombudsman; and it’s only the government that can do that.”
