With advancements in technology, industrial products have evolved. Manufacturing has changed, and jobs must adapt. GE is actively investing in developing the workforce of the future, but greater collaboration is needed among business, government and academia. GE Reports is excited to welcome experts to analyse the impact of technology on the future of work. Here, Matthew Taylor, AI expert and professor at Washington State University’s School of Electrical Engineering and Computer Science, answers our questions.
Will advances in AI make society better off or worse off?
Overall, AI advances will improve our society. Because of AI’s broad reach, there will be changes in many different areas, some of which we cannot predict. Any time there are large technological changes, there will be “winners” and “losers.” AI can raise the quality of life for us on average, but we should also preemptively come up with solutions to help those who are negatively affected.
Will AI create more jobs than it eliminates?
In the short term, it's not clear, as much of today's AI focuses on automation. In the long term, yes. AI will create not only new jobs but entire new industries that will require humans tightly integrated into the system for overall success.
What jobs are most at risk of being replaced by AI?
AI will first target jobs that are repetitive and require little outside knowledge. But machines are still a long way from having the “common sense knowledge” that humans come by naturally.
Advancements in robotics will not only target such repetitive jobs but also those that are dangerous or unappealing. For example, my lab’s work on precision agriculture at Washington State University focuses on automating jobs that fruit growers have trouble hiring people for, even when paying over $20 per hour.
In the short term, low-skill jobs are most vulnerable. In the longer term, even high-skill jobs can be automated. However, rather than thinking of jobs as being “at risk,” it may be more useful to think about how AI can supplement existing workers rather than replace them. For example, when doing background research for a lawsuit, an AI program may be able to quickly generate lists of relevant decisions and patents, but a human with legal training will remain critical for sorting and applying that information when deciding on appropriate actions.
What actions should be taken to counter potential negative consequences of AI, such as labour displacement?
As the economy shifts to using more AI and robotics, more and more jobs will require workers to understand computers and computational thinking. If the U.S. wants to remain competitive, we must find better ways to encourage and support the development of STEM (Science, Technology, Engineering, and Math) skills, even for students who are interested in careers that have been traditionally unrelated to STEM. Computational thinking is becoming increasingly important at both the college and K-12 levels – although the number of students taking advanced placement tests in most STEM fields has been increasing, the number of students taking computer science has remained relatively flat.
Do technologists and engineers have a duty to think about the societal impact of AI, such as labour displacement?
No. For example, overuse of antibiotics threatens to create superbugs that are resistant or immune to current antibiotics. But the researchers who created antibiotics were rightly focused on solving the problems in front of them rather than on the long-term implications.
These problems are critical for our society to discuss but many engineers do not have the ethical, economic or legal training necessary to make well-reasoned decisions. While some engineers must be involved to keep such discussions grounded, many engineers will want to (and should) keep focused on their shorter-term goals. As we better understand what is possible, we can decide how to best shape society to benefit as many as possible.
What would you say to those who fear the impact of AI?
I’ve seen people who fear AI fall into two broad camps. The first is concerned about how society will change, how jobs will be affected, etc. Such concerns are reasonable and can be discussed in the context of other historical events like the Industrial Revolution and the microcomputer revolution.
The second is worried about killer robots taking over. To paraphrase Dr. Andrew Ng, worrying about this class of problems is similar to worrying about overpopulation on Mars: overpopulation could be a problem in the future, but right now we're so far from even getting humans to Mars that it's premature to worry about. This class of problems could be a legitimate concern someday, but from a technological standpoint it's too nebulous a problem to make significant progress on at this point. In the meantime, worrying about problems that may occur in the distant future distracts us from actual problems today.
(Top image: Sensors measure force and pressure during apple hand picking. Credit: Long He, Washington State University.)
This article originally appeared on the US edition of GE Reports. All views expressed are those of the author.