December 11, 2018

Congress takes first steps toward regulating artificial intelligence

How well will artificial intelligence balance the human concept of fairness? Phonlamai Photo/Shutterstock.com

Some of the best-known examples of artificial intelligence are Siri and Alexa, which listen to human speech, recognize words, perform searches and translate the text results back into speech. But these and other AI technologies raise important issues, like personal privacy rights and whether machines can ever make fair decisions. As Congress considers whether to make laws governing how AI systems function in society, a congressional committee has highlighted concerns around the types of AI algorithms that perform specific – if complex – tasks.

Often called “narrow AI,” these devices’ capabilities are distinct from the still-hypothetical general AI machines, whose behavior would be virtually indistinguishable from human activity – more like the “Star Wars” robots R2-D2, BB-8 and C-3PO. Other examples of narrow AI include AlphaGo, a computer program that recently beat a human at the game of Go, and a medical device called OsteoDetect, which uses AI to help doctors identify wrist fractures.

As a teacher and adviser of students researching the regulation of emerging technologies, I view the congressional report as a positive sign of how U.S. policymakers are approaching the unique challenges posed by AI technologies. Before attempting to craft regulations, officials and the public alike need to better understand AI’s effects on individuals and society in general.

Concerns raised by AI technology

Drawing on a series of hearings on AI held throughout 2018, the report highlights that the U.S. is not a world leader in AI development. This is part of a broader trend: U.S. funding for scientific research has declined since the early 2000s, while countries like China and Russia have boosted their spending on developing AI technologies.

Drones can monitor activities in public and on private land.
AP Photo/Keith Srakocic

As illustrated by the recent concerns surrounding Russia’s interference in U.S. and European elections, the development of ever more complex technologies raises concerns about the security and privacy of U.S. citizens. AI systems can now be used to access personal information, make surveillance systems more efficient and fly drones. Overall, this gives companies and governments new and more comprehensive tools to monitor and potentially spy on users.

Even though AI development is in its early stages, algorithms can already be easily used to mislead readers, social media users or even the public in general. For instance, algorithms have been programmed to target specific messages to receptive audiences or generate deepfakes, videos that can appear to present a person, even a politician, saying or doing something they never actually did.

Of course, like many other technologies, the same AI program can be used for both beneficial and malicious purposes. For instance, LipNet, an AI lip-reading program created at the University of Oxford, has a 93.4 percent accuracy rate. That’s far beyond the best human lip-readers, who have an accuracy rate between 20 and 60 percent. This is great news for people with hearing and speech impairments. At the same time, the program could also be used for broad surveillance purposes, or even to monitor specific individuals.

AI technology can be biased, just like humans

Some uses for AI may be less obvious, even to the people using the technology. Lately, people have become aware of biases in the data that powers AI programs. This clashes with the widespread perception that a computer will use data impartially to make objective decisions. In reality, human-built algorithms use imperfect data to make decisions that reflect human bias. Most crucially, the computer's decision may be presented as, or even believed to be, fairer than a decision made by a human – when in fact the opposite may be true.

For instance, some courts use a program called COMPAS to decide whether to release criminal defendants on bail. However, there is evidence that the program is discriminating against black defendants, incorrectly rating them as more likely to commit future crimes than white defendants. Predictive technologies like this are becoming increasingly widespread. Banks use them to determine who gets a loan. Computer analysis of police data purports to predict where criminal activity will occur. In many cases, these programs only reinforce existing bias instead of eliminating it.

What’s next?

As policymakers begin to address the significant potential – for good and ill – of artificial intelligence, they’ll have to be careful to avoid stifling innovation. In my view, the congressional report is taking the right steps in this regard. It calls for more investment in AI and for funding to be available to more agencies, from NASA to the National Institutes of Health. It also cautions legislators against stepping in too soon, creating too many regulatory hurdles for technologies that are still developing.

More importantly, though, I believe people should begin looking beyond the metrics suggesting that AI programs are functional, time-saving and powerful. The public should start broader conversations about how to eliminate or lessen data bias as the technology moves on. If nothing else, adopters of algorithmic technology need to be made aware of the pitfalls of AI. Technologists may be unable to develop algorithms that are fair in measurable ways, but people can become savvier about how they work, what they’re good at – and what they’re not.

The Conversation

Ana Santos Rutschman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

