
On the Proliferation of Artificial Intelligence


Q: Do the benefits of artificial intelligence outweigh the risks?

The late physicist Stephen Hawking warned that artificial intelligence (AI) was “either the best or the worst thing ever to happen to humanity”. The technology promises to solve complex problems and unlock scientific mysteries. But it could also imperil jobs, make possible new kinds of weapons—and even develop beyond human control. Are such fears justified? How feasible is it to control the scale and speed of AI’s spread? And do the benefits outweigh the risks?

This essay was originally submitted for The Economist's Open Future initiative. The Economist is a British publication; hence the UK spelling and the UK's definition of “liberal” (roughly analogous to classical liberalism in the US).

While artificial intelligence (AI) promises unprecedented advances in healthcare, transportation, logistics, energy, manufacturing, human resources, marketing, finance, and agriculture, it is not without risks or costs to society. The benefits AI promises to deliver will outweigh those costs when the public risk posed by these emerging technologies is managed so that value is captured in a way that balances the interests of firms, individuals, and the public at large.

Widespread public concern over how AI will affect jobs, economic inequality, and decision making has fuelled fear of these technologies in recent years and has highlighted the larger problem of information asymmetry between the public, policymakers, and the AI research community. Concerns about other risks, such as weaponized AI, algorithmic bias, privacy abuses, or the possibility of an “AI takeover” following a loss of human control over machines, are not wholly unjustified and require a normative consensus on how best to control the scope and scale of these emerging technologies. Without such a consensus, AI may become subject to unnecessary politicization (much as the environmental movement in the US has been over the past 40 years) or be used as a means to illiberal ends, such as limiting the free expression of ideas and opinions.

The public's paramount fear that AI and automation will cause sudden, large-scale job displacement is unwarranted. In a 2016 report, the OECD estimated, using a job-task model, that 9% of jobs in the US face a high risk of automation, concluding that earlier research had sharply overestimated the threat automation poses to labour markets. Occupations, which consist of sets of tasks, are unlikely to be automated entirely because certain tasks pose engineering bottlenecks. Rather, particular tasks within an occupation are susceptible to automation and, when properly automated, may complement the worker's remaining tasks. Such augmentation is expected to exert upward pressure on labour productivity and wage growth.

Structural changes to the division of labour are necessary for firms to fully realize the economic benefits of AI augmentation and to let human workers exploit automation's engineering bottlenecks by shifting toward tasks that cannot be automated. Profit-maximizing firms will make automation decisions task by task, a decision that requires both sufficiently advanced technology and a comparison of the relative factor prices of labour and capital. Because the automation of an occupation's various tasks is adopted gradually, labour-saving technology is unlikely to cause the sudden, large-scale job losses many predict. Some argue that rising wages and the creation of new AI-related occupations will partially, if not completely, offset the economic losses from automation.
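
The sketch below is one minimal rendering of this task-by-task logic, not the OECD's actual model; the task names and cost figures are entirely hypothetical. It encodes the rule that a firm automates a task only when the technology exists and capital is cheaper than labour, leaving the bottlenecked tasks to workers.

```python
# Illustrative sketch of a task-level automation decision.
# All tasks and cost figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    automatable: bool           # is the technology sufficiently advanced?
    annual_labour_cost: float   # cost of a worker performing the task
    annual_capital_cost: float  # amortized cost of automating the task

def tasks_to_automate(tasks: list[Task]) -> list[Task]:
    """A profit-maximizing firm automates a task only when the
    technology exists AND capital is cheaper than labour."""
    return [t for t in tasks
            if t.automatable and t.annual_capital_cost < t.annual_labour_cost]

occupation = [
    Task("data entry",         True,  30_000, 12_000),  # automated
    Task("client negotiation", False, 45_000, 90_000),  # engineering bottleneck
    Task("report review",      True,  25_000, 40_000),  # labour still cheaper
]

automated = tasks_to_automate(occupation)
share = len(automated) / len(occupation)
print(f"Automated: {[t.name for t in automated]} ({share:.0%} of the occupation)")
```

Only one of the three tasks is automated here, which is the point: the occupation survives, reshaped rather than eliminated.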

This is not to say that the benefits of AI and automation will be shared broadly throughout the workforce. Workers in low-skill occupations that consist of highly repetitive tasks and require little education face the greatest risk of automation and are therefore disproportionately subject to persistent labour churn. Because such workers are concentrated in rural areas, it is critical to weigh the geographic economic impact of automation alongside the aggregate economic impact. While the coming wave of automation is expected to produce a long-run net gain for the economy as a whole, the gains will be concentrated in metropolitan areas, with rural areas left further behind. Without adequate attention to AI's impact on non-metropolitan regions, its adoption could exacerbate economic inequality, transferring wealth from the working class to the upper class, and become subject to otherwise unnecessary politicization, thereby limiting the benefits AI would bring to society.

As algorithms play a larger role in decision making, building automated systems that reflect human values and the principles of equality and justice becomes increasingly important. Unchecked bias, whether implicit or overt, in the creation of algorithms is one of the greatest threats that artificial intelligence poses to humanity. Algorithmic bias may result from implicit factors, such as biased data sets used as inputs to machine-learning algorithms, or from more overt factors, such as a developer manipulating an algorithm for ideological or political ends. A liberal case can be made for fostering greater representation of currently under-represented voices in the AI development workforce: for AI to best serve a diverse population, it must be created by a diverse population. Metrics to identify and correct algorithmic bias must be developed to provide reasonable assurance that AI systems do not discriminate along the dimensions of race, ethnicity, gender, religion, socioeconomic class, sexuality, geographic location, political identification, or ideology.
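
Such metrics can be made concrete. Below is a minimal sketch of one common starting point, the demographic parity difference: the gap in positive-outcome rates between groups. The predictions and group labels are hypothetical, and no single metric captures every dimension of bias listed above; this is a first check, not a complete fairness audit.

```python
# Sketch of a demographic parity check on binary predictions.
# Data and group labels are hypothetical.

def positive_rate(predictions, groups, group):
    """Share of members of `group` who received a positive prediction."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group-level positive rates."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approved) by group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 here; 0.00 is parity
```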

Viewpoint discrimination, arising from bias in favour of or against particular political identifications or ideologies, is a crucial but understudied variant of algorithmic bias. A broadened definition of diversity, inclusive of more abstract differences such as geographic upbringing, ideology, and economic background, is necessary to serve the public good. The disenfranchised poor, often located in rural or agricultural regions, are not adequately represented in the development of AI, which is highly concentrated in coastal metropolitan areas. For AI to be widely accepted by the public, tech companies must shed the public perception that they are leftist ideological echo chambers, and many big tech firms are working to do so: Facebook recently removed its “Trending News” feature in response to criticism accusing the tech giant's algorithm of bias against conservatives. Without participation in the creation of AI that is inclusive of diverse opinions, experiences, backgrounds, and identities, AI proliferation may lead to illiberal consequences and be viewed with skepticism by underrepresented groups, giving way to its politicization.

Equipping AI with the use of force remains hotly contested, and for good reason. Last year, Elon Musk joined 115 other AI researchers in sending a letter to the United Nations calling for a ban on autonomous weapons. Others argue that autonomous weapons put human soldiers further from harm's way, possibly saving lives. Regardless, weaponized AI may pose an existential threat to humanity in the form of an AI arms race, the use of extreme or unnecessary force to achieve a programmed objective, or a loss of human control over such weaponry. Lethal autonomous weapons also pose an accountability challenge and may therefore be acceptable only in limited, specific use cases. The international community must quickly reach a normative consensus on the development, use, and very definition of lethal autonomous weaponry in order to control its proliferation.

Fears of malicious actors hacking or tampering with autonomous systems are justified. Information security is critical to maintaining data and algorithm integrity, particularly in high-risk contexts such as transportation, medical intervention, and weaponry. As the field of AI research and development matures, such risks should be mitigated through the creation and validation of safety standards, certifications, and governance frameworks. Governments and industry associations will need to play a role in creating and enforcing application-specific standards to ensure systems are built and tested for safety.

In recent years, consumers have become more cognizant of the cost of their digital footprint and therefore more privacy-conscious. The availability of big data is a key input for developing and improving AI. Machine-learning algorithms analyze massive data sets for classification and predictive modelling, which serve as the foundation of AI. To balance the interests of consumers and firms, international standards should be developed for giving consumers notice of what personal data they provide to firms and how that data is used. To protect individual privacy rights, anonymized data sets are preferable for machine learning whenever possible.
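
As a rough illustration of what preparing such a data set might involve, the sketch below drops direct identifiers and replaces them with a salted one-way hash. All field names are hypothetical, and this is strictly pseudonymization, not full anonymization: real practice must also consider re-identification through quasi-identifiers (age, postcode, and the like), for example via k-anonymity.

```python
# Sketch: strip direct identifiers from a record before it enters a
# machine-learning pipeline. Field names are hypothetical; hashing a
# key is pseudonymization, not full anonymization.

import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(value: str, salt: str) -> str:
    """One-way hash so records can be linked without exposing identity."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def anonymize(record: dict, salt: str) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["user_key"] = pseudonymize(record["email"], salt)
    return out

record = {"name": "Jane Doe", "email": "jane@example.com",
          "phone": "555-0100", "age": 34, "zip": "97201"}
print(anonymize(record, salt="per-dataset-secret"))
```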

AI will give society the capability to solve complex problems, from eliminating urban traffic congestion to finding a cure for cancer. However beneficial AI may be, safeguards must be put in place to control and govern it in order to balance the interests of firms, individuals, and the public at large. A broadened scope of computer ethics may be necessary, extending beyond how systems are used in practice to encompass ethical consideration of what systems are built and how. Something like a Hippocratic Oath for AI may be desirable to provide reasonable assurance that “do no harm” supersedes any conflicting objective.
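
One hypothetical way to make “do no harm supersedes” concrete in software is to treat harm avoidance as a hard constraint applied before the primary objective is optimized, rather than as one weighted goal among many. The toy sketch below invents all of its names and its harm check purely for illustration.

```python
# Toy illustration of "do no harm" as a hard constraint: harmful
# candidate actions are excluded before reward is maximized.
# Everything here is hypothetical.

def choose_action(actions, reward, is_harmful):
    safe = [a for a in actions if not is_harmful(a)]
    if not safe:
        # No safe option: refuse rather than trade harm for reward.
        raise RuntimeError("no safe action available; defer to a human")
    return max(safe, key=reward)

# Contrived example: the highest-reward action is also harmful,
# so the constraint forces the second-best choice.
actions = ["a", "b", "c"]
reward = {"a": 10, "b": 7, "c": 3}.get
harmful = {"a"}

print(choose_action(actions, reward, lambda a: a in harmful))  # -> "b"
```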

Just as AI can be used for societal good, it is important to remember that it can be used for illiberal ends too: China has recently made headlines in the West for using AI for domestic surveillance of dissidents and for censoring online speech. Harnessing the power of AI should be confined to applications that do not violate individual rights or liberties. With sufficient mitigation of public risk through control of the scope and scale of AI proliferation, AI can be a net benefit to society and ultimately worthwhile.

Andrew Benson