By Erik Schluntz
As we look toward the future of Artificial Intelligence (AI) and robotics in the workplace, it’s hard not to imagine a world where robots do nearly all of the work. An AI-driven workforce is becoming increasingly feasible with the advancement of a new type of AI called the Large Language Model (LLM). These models can automatically write text, power customer support chatbots indistinguishable from humans, and even program software themselves. The technology has the potential to reshape the workforce in ways almost no one predicted.
Five years ago, it was widely assumed that truckers, taxi drivers, and other blue-collar workers would be the first affected by automation. Instead, the tech industry is quickly capitalizing on the potential of AI in white-collar work. The automation of writing, marketing, and coding is already underway: companies like Jasper, valued at over $1B, offer automated production of marketing content, and GitHub’s Copilot can assist software developers and even write code itself.
So what is a Large Language Model, and how does it work? An LLM is trained by reading vast amounts of text from the internet and repeatedly predicting the next word it will see. Training at this scale is a monumental task, and so far only a few companies have had the resources to accomplish it; the most famous is OpenAI, backed by Microsoft, with its product “GPT-3.” While practicing the seemingly simple task of “predict the next word,” LLMs miraculously pick up other skills along the way: coding, answering questions, even writing poetry. (How else could a model predict the next word in a poem if it couldn’t write poetry itself?)
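To make “predict the next word” concrete, here is a deliberately tiny illustration, not a real LLM: a bigram model that counts which word tends to follow each word in a toy corpus, then predicts the most likely next word. Real LLMs perform the same task, but with billions of parameters trained on internet-scale text rather than simple counts.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word tends to follow each word in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next_word(model, word):
    """Return the most frequently observed next word, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next_word(model, "the"))  # "cat" follows "the" twice, "mat" once
```

The gap between this sketch and GPT-3 is one of scale and architecture, not of task: both are scored on how well they guess the word that comes next.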
But the impact doesn’t stop at text. Large Language Models can be combined with other forms of AI to give them “common sense.” The two most exciting examples of this are AI-generated images and robotics.
AI image generators such as DALL·E 2 and Stable Diffusion can produce human-level artwork from a short text prompt, without the user specifying painstaking detail, and many of their outputs went viral on the internet this year. Under the hood, they use LLMs to understand what the human’s input means.
Previously in robotics, designers had to explicitly program every instruction a robot could follow. With the “common sense” provided by an LLM, it becomes much easier to give a robot human-level instructions, without painstakingly detailed programming.
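The idea can be sketched as follows. This is a hypothetical illustration: `query_llm` stands in for a real model API call and is stubbed here with a canned answer, and the primitive names (`move_to`, `pick_up`, `place`) are invented for the example. The pattern, though, is the one described above: the LLM expands a human-level instruction into primitive actions, and the robot code keeps only steps it actually knows how to execute.

```python
# Primitive actions this hypothetical robot can execute.
PRIMITIVES = {"move_to", "pick_up", "place"}

def query_llm(prompt):
    # Stub: a real system would call an LLM API here.
    return "move_to(kitchen); pick_up(cup); move_to(table); place(cup)"

def plan_from_instruction(instruction):
    """Turn a human-level instruction into a list of primitive steps."""
    prompt = (f"Translate the instruction '{instruction}' into robot "
              f"primitives chosen from {sorted(PRIMITIVES)}.")
    response = query_llm(prompt)
    steps = [s.strip() for s in response.split(";")]
    # Filter out anything that isn't a known, executable primitive.
    return [s for s in steps if s.split("(")[0] in PRIMITIVES]

print(plan_from_instruction("bring the cup to the table"))
```

The “common sense” lives entirely in the language model: the robot programmer no longer enumerates every situation, only the small vocabulary of primitives and a safety filter over the model’s output.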