The term online chatbot had a different meaning before November 30, 2022, often bringing to mind Microsoft’s Cortana, Amazon’s Alexa, or Apple’s Siri — personal virtual assistants that used natural language to respond to basic search inquiries. Users were impressed with the ability to schedule calendar appointments, make calls, and quickly perform basic Bing and Google searches.
That all changed one year ago, when OpenAI launched ChatGPT, a conversational chatbot at the forefront of generative AI. In the year since its launch, ChatGPT has seen explosive popularity, as its ability to write complex code, compose music, generate business proposals, play games, and answer complex questions has redefined the capabilities of GenAI systems.
Columbia Business School spoke with Professor Oded Netzer, the School’s vice dean for research and the Arthur J. Samberg Professor of Business, as well as Professor Olivier Toubia, the Glaubinger Professor of Business, as they reflected on the one-year anniversary of ChatGPT and why they believe the world may never be the same.
Here’s what they told us:
CBS: What do you believe is the most surprising capability of ChatGPT? Why?
Oded Netzer: The key to the success of ChatGPT and other generative AI models is their ability to interact with humans at a human-like level. I believe that this was the most surprising factor behind ChatGPT’s success at the end of 2022. Up to that point, we interacted with tools like Siri, Alexa, or Google Assistant, which were (and to a large extent still are) subpar in terms of their ability to converse with humans at a human level.
To me, one of the most surprising capabilities of ChatGPT is its ability to take noisy text, such as output from error-prone voice-to-text software or chat transcripts between people full of back-and-forth and fillers like um... and hmm…, and summarize it very succinctly, even though the original document would be nearly incomprehensible to humans. I find it surprising because it is one of those tasks where ChatGPT not only reaches a human level but performs at a superhuman level.
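The kind of noise Netzer describes can be made concrete with a toy pre-processing sketch. This is purely illustrative and not part of any ChatGPT pipeline (the `clean_transcript` function and its filler list are invented here): it strips filler tokens and stutters explicitly, the sort of cleanup an LLM handles implicitly when it summarizes a messy transcript.

```python
import re

# Filler tokens and backchannels commonly produced by voice-to-text tools
# (an assumed, illustrative list).
FILLERS = {"um", "uh", "hmm", "er", "ah"}

def clean_transcript(text: str) -> str:
    """Strip filler words and collapse stutters from a noisy transcript."""
    words = text.split()
    # Drop fillers, ignoring trailing punctuation like "hmm..." or "um,".
    kept = [w for w in words if w.strip(".,!?…").lower() not in FILLERS]
    # Collapse immediate word repetitions ("I I think" -> "I think").
    out = []
    for w in kept:
        if not out or w.lower() != out[-1].lower():
            out.append(w)
    return " ".join(out)

noisy = "So um I I think hmm... the the quarterly numbers uh look fine"
print(clean_transcript(noisy))  # So I think the quarterly numbers look fine
```

A rule-based filter like this only scratches the surface; the surprising part is that an LLM performs the full cleanup and summarization without any such hand-written rules.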
Olivier Toubia: I’ve been most surprised by GPT’s ability to generate ideas. I used to believe that machines would not be able to display creativity because they would lack the intuition necessary to determine which concepts to combine to form a new idea and how to combine them. Obviously, I was wrong.
CBS: Many speculate that ChatGPT will make professions in fields like customer service, entertainment, finance, and programming obsolete. Do you agree with these predictions? What professions do you think are most likely to change with the introduction of ChatGPT?
Netzer: I don’t think that generative AI will make many jobs obsolete. Instead, it is likely to improve a lot of jobs, taking the mundane aspects of the job away and freeing people to deal with more complex and interesting aspects of the job, hence increasing job satisfaction. Take programmers as an example. Generative AI has already had a meaningful impact on the work of programmers. Tools like GitHub Copilot allow programmers to program much more efficiently, particularly replacing the mundane tasks of writing simple procedures or debugging code, allowing programmers to focus on problem-solving and how to structure the code to address the problem. This leads to more efficient coding and higher job satisfaction, as most programmers do not enjoy debugging their code, just as most of us don’t enjoy copy editing our emails or memos.
We are likely to see similar effects in professions like customer service. Generative AI will answer, or provide the script for humans to answer, the mundane customer questions, leaving human service agents to deal with complex issues that require a human touch. It will also lead to upskilling the customer service workforce, as they will deal with more complex and interesting tasks. Some professions, like simultaneous translation, are likely to be replaced once AI reaches sufficient accuracy, leaving humans with perhaps only VIP-level translation tasks.
Toubia: I don’t know if obsolete is the best term, but industries that rely on the creation of text are obviously being disrupted. There is an ongoing debate as to whether large language models (LLMs) will replace humans in these industries or augment their productivity and open new opportunities. If we look at the Industrial Revolution as a historical example, many jobs and tasks in manufacturing and agriculture have been automated, but humans still perform some. The same might happen with the generative AI revolution. When it comes to industries that relate to culture, entertainment, and creativity, I think humans have a bias in favor of content generated and delivered by other humans vs. machines. Therefore, just like we pay a premium for handcrafted products, I suspect we will be willing to pay a premium for human-crafted creative content. This means LLMs might replace humans in creating mass-produced content, but there will remain a premium segment of human-crafted content.
CBS: The government has struggled to regulate ChatGPT and other generative AI tools at the pace with which they are growing. Do you think there is a pathway for the federal government and technology companies to work together to implement common-sense regulations without stunting AI’s technological growth?
Netzer: The issue of regulation is complex, but we cannot repeat the mistakes we made with social media, where we left regulation in the hands of the companies. I am happy to see that generative AI regulation is being discussed already in the early days of the technology. Indeed, the way to do so is for companies and regulators to work together and set boundaries around the technology. I don’t believe slowing down the progress of technology is possible, or that it is the right answer. One important aspect for regulators to enforce is the creation of watermark-type solutions, so that consumers know whether the content they consume was generated by humans or by AI.
Toubia: I am pessimistic about the ability of the US government to regulate generative AI. Like in other domains, other parts of the world, including Europe and Asia, might set standards that may be adopted later in the United States. We need to understand that generative AI can influence and create culture, moral values, ideas, and ideologies, which is why regulations are important.
CBS: How do you see ChatGPT’s role as a tool for the classroom?
Netzer: We should think about generative AI as a friend rather than a foe in the classroom, at least at the higher education levels. One could draw a comparison from the introduction of calculators. At the elementary school level, we want to make sure that kids learn the basics of math operations and learn to do some basic math in their heads. Once they get to high school, we let them use calculators for more complex math problems, since they will be able to use calculators later on in life when dealing with complex math problems. Similarly, when we write math exams in high school or college, we make sure that simply having a calculator is not enough to be able to answer the questions. Similar logic should be used for generative AI.
In primary school, we want to teach kids to learn writing skills, so not allowing children to use generative AI for writing makes sense. However, at high school and higher education levels, our role is to prepare students for a world where generative AI is an integral part of their lives. We have to integrate such tools into our curriculum. If generative AI can receive a decent grade in our higher education classes, we have failed as educators. As educators, our objective should be to teach students things that machines cannot do, like judgment, morality, and synthesis of information, while using machines for lower level tasks (as we do with calculators in math) rather than teaching skills that machines can already do.
Toubia: Together with [CBS Assistant Professor] Malek Ben Sliman, I have developed a new course, Generative AI for Business.
CBS: Can you give us one prediction for ChatGPT in 2024?
Netzer: In one word, embedding. I expect that in 2024, we will start seeing generative AI, whether it is ChatGPT or other tools, embedded in other productivity tools. Rather than going to the ChatGPT or Google Bard app or website, we will see generative AI embedded into the productivity applications we already use, like email software, spreadsheets, presentation programs, or word processors. Rather than using menus to generate a slide or a table, we will speak or type plain-English instructions, like ‘Create a graph of sales by month,’ ‘Make the Y axis capped at 1,000,’ ‘Make the bars wider,’ ‘Add a label to the X axis that says Sales between 2021-2023,’ etc. Personally, I look forward to that reality.
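The command-driven workflow Netzer imagines can be caricatured in a few lines. This is a purely hypothetical sketch of how an embedded assistant might map plain-English instructions onto chart settings; the `apply_instruction` function, the `spec` dictionary, and the pattern rules are all invented for illustration, and a real product would use an LLM rather than regular expressions.

```python
import re

def apply_instruction(spec: dict, instruction: str) -> dict:
    """Map a plain-English chart instruction onto a chart-spec dict (toy dispatcher)."""
    text = instruction.lower()
    if m := re.search(r"y axis capped at ([\d,]+)", text):
        # "Make the Y axis capped at 1,000" -> numeric upper bound.
        spec["y_max"] = int(m.group(1).replace(",", ""))
    elif "bars wider" in text:
        # Widen bars by 50% each time the instruction is given.
        spec["bar_width"] = spec.get("bar_width", 0.5) * 1.5
    elif m := re.search(r"label to the x axis that says (.+)", instruction, re.IGNORECASE):
        spec["x_label"] = m.group(1).strip("'\" ")
    return spec

spec = {"chart": "sales_by_month"}
for cmd in ["Make the Y axis capped at 1,000", "Make the bars wider"]:
    spec = apply_instruction(spec, cmd)
# spec now carries y_max=1000 and bar_width=0.75 for the charting layer to render.
```

The point of the sketch is the interface, not the parsing: the user states intent in plain language, and the application translates it into the settings it already understands.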
Toubia: ChatGPT will become a bit boring in 2024. It will hallucinate less, become better behaved, and look more and more like a search engine. It is growing up from a child to a teenager, which is good in many ways, but it might lose some of its charm.
The Bernstein Center for Leadership and Ethics recently hosted the 2023 Klion Forum, featuring a panel on the ethical concerns of generative AI and the risks and challenges that advances in machine learning pose for today’s business leaders. Watch the discussion, which was moderated by CBS Professor Daniel Guetta, here: