Tech
Flexing the Human Hands behind Artificial Intelligence
Moore’s Law is the simple observation that the number of transistors on a computer chip, and with it processing power, roughly doubles every two years. It is not a law of physics but an empirical trend, yet it has held remarkably well since Gordon Moore first described it in 1965.
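To see what that doubling implies, here is a rough back-of-the-envelope sketch in Python; the 1971 starting point of roughly 2,300 transistors (the Intel 4004) is only an illustrative anchor, not a claim about any particular product line.

```python
# Back-of-the-envelope sketch of Moore's Law: transistor counts doubling every two years.
# The ~2,300-transistor starting figure (roughly the Intel 4004 in 1971) is illustrative only.

def projected_transistors(start_year: int, start_count: float, target_year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward, assuming one doubling per `doubling_period_years`."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1971, 1991, 2011, 2021):
        count = projected_transistors(1971, 2_300, year)
        print(f"{year}: ~{count:,.0f} transistors per chip")
```

Twenty-five doublings later, this toy projection already lands in the tens of billions of transistors, which is roughly where today's largest chips sit.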
Much like the growth Moore’s Law describes, the rate at which technology influences our lives appears to be accelerating rapidly. At the current forefront of this acceleration is artificial intelligence (AI). It may be just the latest entrant in a torrent of disruptive technologies, but the impact, scale, and speed of the change it is driving are uniquely alarming. It is therefore worth examining what AI, and the nature of technological development more broadly, mean for our lives and for future policymaking.
New technologies have always shaped our lives and our increasingly globalized world, but technology became particularly influential with the first Industrial Revolution, beginning around 1784. For the first time, machinery was substituted for human physical labor. This fundamental shift dramatically redefined the world: it created and destroyed industries, redistributed the balance of power between nations, and began the race for continuous economic growth. Between the beginning of human civilization and the 1700s, global GDP is estimated to have crawled to a total of $726.91 billion. After a mere 300 years of industrialization, that total has exploded to over $134.08 trillion.
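Taking those two estimates at face value and assuming a 300-year span, a quick calculation shows the implied average annual growth rate:

```python
# Implied compound annual growth rate from the two GDP estimates above.
# The ~300-year span and the dollar figures themselves are estimates, not precise data.

start_gdp = 726.91e9    # ~$726.91 billion, pre-industrial world GDP estimate
end_gdp = 134.08e12     # ~$134.08 trillion, modern world GDP estimate
years = 300

growth_rate = (end_gdp / start_gdp) ** (1 / years) - 1
print(f"Implied average annual growth: {growth_rate:.2%}")  # roughly 1.75% per year
```

Compounded over three centuries, even a growth rate below two percent per year multiplies the total roughly 185-fold.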
We have since experienced two additional revolutions: the rise of electricity and mass production around 1870, and the rise of electronics around 1969. Today’s emerging technologies can be viewed as a continuation of the digital technologies of that third revolution. Yet the total system transformation they represent arguably amounts to a fourth Industrial Revolution, one in which AI appears to be leading the charge.
AI is many things, but its most consequential feature is that, for the first time, machinery is replacing cognitive labor. As with the first revolution, the replacement of physical labor demanded major market adaptations and new skills. It replaced old jobs with new, higher value-added ones, drastically increased productivity, and created entirely new kinds of work and industries. It also drove short-term job losses, social disruption, and the permanent loss of certain jobs and industries. The rising tide lifted most boats, but a few did sink.
There is no shared definition of AI, but it is essentially an algorithm that can interpret data to create novel outputs. “Weak” versions of AI have long been part of our lives, recommending shows on Netflix or curating your social media feed. Many companies have capitalized on the AI hype by rebranding these familiar algorithms as AI, but the real disruption is being driven by “strong” AI, which operates on the same principle but learns through drastically more complex neural networks.
This upgrade yields far richer outputs and learning capabilities; it is what powers ChatGPT’s conversational abilities and DALL·E’s ability to generate images from text prompts. These capabilities have been quickly recognized and adopted by organizations of every size, from small businesses to U.S. government agencies. Strong AI has not only significantly increased productivity but also unlocked entirely new ways of working. It has also proven incredibly disruptive, fueling job-security concerns, disinformation, and new national security threats.
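For readers curious what the “learning” in these systems actually involves, the toy sketch below trains a single artificial neuron, the basic building block of a neural network, to pick up a simple pattern from labeled examples. It is a minimal illustration only; systems like ChatGPT chain billions of such units together.

```python
import math
import random

# A single artificial neuron learning the logical OR function from labeled examples
# via gradient descent. A toy sketch of the principle, not a production system.

training_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.5

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

for _ in range(5000):                      # repeated passes over the labeled data
    for inputs, label in training_data:
        prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = prediction - label         # how far off the neuron currently is
        # Nudge each weight in the direction that reduces the error.
        weights = [w - learning_rate * error * x for w, x in zip(weights, inputs)]
        bias -= learning_rate * error

for inputs, label in training_data:
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(f"{inputs} -> {output:.2f} (expected {label})")
```

After a few thousand passes over the four labeled examples, the neuron’s outputs converge toward the correct answers. Scaled up by many orders of magnitude, and with far more sophisticated architectures, the same basic idea underlies today’s strong AI.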
Technological development often feels inevitable. New tools come out, and it becomes a race for workers to learn them and for policymakers to regulate them. It is an essential loop that keeps us competitive in the marketplace and on the international stage. In fact, it is often assumed that any problem can be solved with new technology. This is not to say new technologies aren’t essential to solving problems and creating new opportunities. But it is deterministic to assume that their development is inevitable and that the direction of change is determined by the technology itself. A hammer can be used to build something new or to destroy; it is the wielder who ultimately decides. As users, developers, investors, and policymakers, we all have the agency to determine the direction of change.
It’s easy to assume that AI is driven solely by talented programmers and keen-eyed investors. What’s less often discussed is that AI is only made possible by training data. This data is scraped from the Internet and contains anything from your selfies to the professional work of writers and artists. Even then, that training data is only made usable by the human labor of labeling it. And if we are to mitigate harm and capture the full benefits of AI, prompt engineering will also be an essential skill in future AI-enabled workplaces. In other words, AI is upheld and made possible by the invisible labor of our brilliant humanity. The same is true for every technology, and the development of technology is a path charted by the complex interactions among policy, business, and, ultimately, people.
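To make that invisible labor concrete, here is what human-labeled training data typically looks like; the sentences and labels below are invented purely for illustration.

```python
# What "human-labeled training data" looks like in practice: every example a model
# learns from is paired with a judgment someone had to make by hand.
# These sentences and labels are invented for illustration only.

labeled_examples = [
    {"text": "The package arrived two weeks late.",      "label": "negative"},
    {"text": "Setup took five minutes and just worked.",  "label": "positive"},
    {"text": "Customer support never replied.",           "label": "negative"},
]

# A model only ever sees pairs like these; the labels are the invisible human
# labor that makes the learning possible.
for example in labeled_examples:
    print(f"{example['label']:>8}: {example['text']}")
```

Every label in a set like this represents a judgment a person had to make, often repeated thousands or millions of times over.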
While new technologies may challenge our traditional systems, the fundamental problems and opportunities they represent are ultimately the same. The impact of AI on cognitive labor closely mirrors the impact of the first Industrial Revolution on physical labor. And just as in 1784, we will need to reskill the workforce for this new paradigm.
Such change will require good policy that ensures the direction of technological development benefits everyone. At a time when the pace of change is already blisteringly fast, we will need greater funding for research and adaptable policy frameworks that can keep up. Most importantly, policy must resist the mentality of technological determinism, center humanity as the primary driver and beneficiary of change, and include all stakeholders at the decision-making table.