There is a humane argument for universal income, one I believe in, that if people lose their jobs, we have to support them.
But there is an economic-justice argument too, given the job loss economists say ChatGPT and similar programs will cause. Artificial intelligence applications like ChatGPT take information from the web. They treat the collective, creative energy of anybody who ever posted anything on the web as if it were their own.
Earlier attempts at artificial intelligence were like building a better mousetrap, which is not the way to look at ChatGPT. When a computer beat a chess champion back in the day, it was strictly a function of the code. A programmer coded chess rules into the computer, the computer figured out the next right move, and so on.
ChatGPT is more than code. It is data from the web. When a tired house-husband types into an AI engine, “Things to do in Phoenix,” and gets a detailed response, the hard information came from somewhere. ChatGPT can’t figure out the names of parks, zoos, restaurants, and hotels in Phoenix. ChatGPT can’t make value judgments or form opinions about Phoenix attractions. It gets that information from webpages, travel blogs, chamber of commerce pages, and reviews.
The term AI people use for the information they feed into the computer is training data. That’s the right term, but ChatGPT and similar applications stretch the meaning of training data to the point of euphemism. They are not so much using small subsets of data to teach the computer to make predictions, the usual meaning of training data, as creating a gigantic database by Hoovering up and storing every webpage they can find.
If there’s an alternative explanation for how AI applications know all the facts, I’d like to hear it. Without access to the web, sequestered in a bunker somewhere, without ever having visited Phoenix, I’d like to see AI programmers write an application that returns to me things to do in Phoenix. See, they can’t. The information comes from webpages that in nearly all other cases users have to cite as sources.
This is the case for universal income. The success of AI, and resulting job loss, will happen because of content produced by all of us.
Most of the web already works like that. Amazon reviews, positive and negative, are valuable. They draw people to the site. Some reviewers, working with vendors, have figured out how to monetize reviews. Amazon doesn’t like that, but they benefit from it. Anybody who ever wrote a review on Amazon produced content that helped the site succeed.
Most of us don’t get paid for our reviews. Every time Amazon asks me to write a new review, I feel like they’re asking me to work for free. That might sound like an overreaction, but ask a plumber to fix your pipes when she’s over for dinner and see what answer you get. I write reviews sometimes. I want to fit in. I want to give back. But Amazon cashes in.
It’s not a small thing. If everybody who ever wrote a review for Amazon deleted their content tomorrow, Amazon would panic. I’m not advocating people do that. I’m just pointing out reviews have more value than most people think. If Amazon wanted to repopulate their site with reviews, they’d have to pay armies of copywriters. Those copywriters would have to duplicate the expertise of the lost review writers, who in many cases had specialized knowledge of fields like electronics. The copywriters would also have to put in time with the products. The cost would be gigantic.
And so, the economy has changed. In an interconnected world, where information has value, anybody who is typing on the Internet is working.
This content production is more than ChatGPT and Amazon. It’s comments on Yahoo, Yelp, and a million other sites. It’s social media. Every bit of content posted on the Internet makes money for somebody, though often the beneficiary is not the content producer.
And while it might be true that most people who write reviews on Amazon don’t expect compensation, ChatGPT and similar applications short-circuit visits to webpages that do expect compensation.
We’re moving towards a world where ChatGPT and similar gatekeepers monetize access to information, and the people who produce information receive nothing. The change is happening almost without warning.
The better-mousetrap part of ChatGPT is language. ChatGPT sounds realistic. But without facts cribbed from the web, the language skills of ChatGPT would be meaningless. We’d get Mad Libs back for responses.
There are reports of ChatGPT college-essay-style answers with fake sources. The most likely explanation is the fake sources are placeholders for real sources. ChatGPT may be holding back real sources to avoid triggering intellectual property concerns. If they do publish real sources someday, a for-citation model of compensation could be worked out with content authors. But they could still hide some sources.
In a way, ChatGPT is a slick version of the high school student who changes a few words in plagiarized content and pretends it’s his work.
If ChatGPT used the Encyclopedia Britannica alone for training material, the program could manipulate and fuse information so that no one entry could be identified as the principal source for a given response. But all the hard information came from the encyclopedia. That was the only source.
This is what ChatGPT is doing with information on the Internet. The human-sounding voice bewitches us. We think the programmers coded an application that can think for itself. But the information is ours.
Universal income might not be the best solution to AI-caused job loss, but it is a solution. The justification for universal income is more than kindness and humanity, though kindness and humanity ought to be enough. The justification is we all produced the content.