Should We Fear ChatGPT? Or Its Makers?
Answer: What’s behind the emerald curtain may be a thousand times more harmful than ChatGPT, Bard, or the like.
Recently, there has been a tremendous amount of hype surrounding ChatGPT and other Large Language Models (LLMs)[1]. The media can’t stop obsessing over them, and in turn we find ourselves swept up in the same debates and waves of panic. And these debates are good. We should ask questions. We should enforce accountability. More importantly, we should strive to understand the technology and not get left behind. In this post, I will share my opinion on what’s going on.
As someone in the AI field, I am divided. On one hand, I still feel a child’s delight when I code or work with Artificial Intelligence (AI) models. On the other hand, I am deeply skeptical of the companies spearheading these practices. AI is but a tool; the onus should rightly fall on the makers of these tools, and the world should demand that they follow ethical practices while building their products. After all, who was Frankenstein but the Creator?[2]
If the past ten to fifteen years of social media and technology have taught me anything, it is that these companies will take the first opportunity to place the blame on their users. We have to stay vigilant and remain aware of the trends and challenges that pour out of Silicon Valley. As someone who has worked with enough nerdy tech bros, I can say one thing for sure: these engineers typically have a limited understanding of the impact of their products, or rather, they don’t think about that impact beyond the narrow scope of project deadlines and milestones. Picture the robotics engineers from the Netflix documentary Working: What We Do All Day. (If you haven’t seen it, give it a shot!) I’ve been surrounded by guys like that, and for some time, I was one of them. To be clear, I’m not saying that AI scientists build half-baked AI products on purpose. It doesn’t happen by intent, of course, but rather because of the profit-driven nature of the current tech ecosystem.
Leading AI researchers (all men, by the way) whom we used to look up to in grad school, and whose papers we religiously followed and re-implemented, are now sounding alarm bells: Geoffrey Hinton (the “Godfather” of AI), Sebastian Thrun (robotics and autonomous-vehicle expert), and the very CEO of OpenAI[3]. But look at the companies they worked for or are working for: none of them has paused in the AI arms race. It appears to me that these men are merely covering their arses. You know the saying, “Never meet your heroes”? I’ve never met them, but their actions have been deeply disappointing nonetheless.
When OpenAI published its Democratic Inputs to AI challenge, I was excited. I thought, “Good. They are worried about it and are invested in fixing their product.”
BUT a closer look opened up a few more cans of worms:
Why is the challenge open for less than a month? If it’s a world-altering and life-changing problem, shouldn’t one take more time?
Why is the prize money a mere 1 million USD in total when OpenAI is worth nearly THIRTY billion?
Why aren’t more of their own highly paid employees actively working on addressing these issues? And how could they not have thought about them while building the product? I know how intensely AI programmers discuss their data and the capabilities they want to add. How could they miss such giant red flags? Or were the flags ignored on purpose?
And Geoffrey Hinton, leaving Google? I wonder if he left his massive equity package behind.
So, Is AI The New Horseman Of The Apocalypse?
In short: I think we are still a long way off from AI world domination. Some of the fears are a bit overhyped, but that doesn’t mean we shouldn’t be careful. I have been experimenting and playing around with ChatGPT and Vertex AI (Google’s platform for its LLMs) for the past month, and I have found that they are not as capable as the media believes. At least not without intentional programming by the user: only if the CEO of your company deliberately commissions a prompt that can do your job can the bots replace you. We should not forget that the bots themselves are not out to take your job, rob a bank, or spread malicious photos of women online; it’s the humans behind them.
I believe that accountability should rightly lie with the creators and users of AI, and not with the AI models themselves. Blaming the models is a mere scare tactic and a way to distract people from the real problem. Not unlike how pervasive social media has become despite its harmful effects on society and the human mind: we are told to blame ourselves for our five-hour screen time, or to feel weak for falling for the latest ad on Instagram. Or we blame the app itself: damn that TikTok, it is so addictive. BUT what most people don’t realize is that it’s not your fault. The problem is systemic. It’s the people who built algorithms that target the human psyche; it’s features like “infinite scroll” that play on human weaknesses[4]. It’s everywhere.
And I’m afraid of the same thing happening all over again. To be clear, I’m not just afraid; I am 90% sure that it will. The very companies that push infinite ads on you are the same ones at the head of the AI race now. The CEOs shirking responsibility and building backup homes on Mars and remote islands will escape any forthcoming fallout. Will we?
For every thousand companies that work on AI and use it to serve bottom lines, there is one AI company that promotes ethical practices, encourages policymaking, and genuinely wishes to use AI to benefit society. We should all champion more such companies. They may be a weak shield, but a shield nonetheless.
How Can We Use AI To Our Advantage?
The first step to getting over the fear is to familiarize yourself with the product. If you haven’t yet, I suggest you play around with the free ChatGPT tool or similar products. You may be surprised to find some uses for it, too!
As a non-native English speaker who still hasn’t got a handle on corporate lingo, I have used ChatGPT to help edit my resume and cover letter. When recruiters can hide behind applicant tracking systems (ATS) and other software that uses AI, why shouldn’t we? I felt this was an opportunity to level the playing field. Knowing what the ATS wants, beyond just action verbs, was very helpful. But of course, you should take any suggestion from ChatGPT with a grain of salt.
TIP: The usefulness of these bots is determined by how good your prompt is.
For example, you can write a sentence in your own words and ask ChatGPT to polish it, saving you both time and effort. If you can’t afford a career coach or a resume-writing service, ChatGPT can help you. It helped me.
This is how I used it to edit my objective statement: I gave it a sentence I had written that didn’t flow as well as I wanted, and I asked for help fixing it. Notice that I didn’t ask it to generate a summary statement on its own. That’s all.
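If you prefer scripting to the chat window, here is a minimal sketch of the same trick using OpenAI’s Python client. The API key, model name, and draft sentence below are placeholders, not my actual prompt, and the snippet assumes the pre-1.0 openai package:

```python
# pip install "openai<1.0"  (this sketch uses the older client interface)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

# A rough sentence in my own words (made-up example, not my real resume).
draft = ("Robotics engineer, 5 years experience, I want to use my skills "
         "in perception and planning in a senior role.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier
    messages=[
        {"role": "system", "content": "You are a concise resume editor."},
        {"role": "user", "content": (
            "Rewrite this objective statement so it flows better. "
            "Keep my meaning and do not invent any facts:\n" + draft)},
    ],
    temperature=0.3,  # low temperature: fewer creative liberties with facts
)

print(response["choices"][0]["message"]["content"])
```

The design choice that matters is in the prompt itself: ask for a rewrite of your own sentence, not a from-scratch summary, so the output stays grounded in your words.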
On LinkedIn, outraged recruiters are a dime a dozen. They hate resumes and cover letters created with AI. But that is unfair when they themselves keep using AI for (biased) screening. I realize that if a recruiter reads this post, they will probably reject me at once. But that’s okay; I wouldn’t have fit in with their values anyway. There is always another recruiter who would agree that job searches are stressful and difficult enough as it is, and that it’s okay to get help where we can. Hopefully.
This outrage also borders on the comical, as there already exist many online resume builders and other websites that use ML and AI. Why should one that runs on a foundation model be feared so? And it’s not just resume writing: in almost every sphere of our lives, AI is already present and pervasive. From Netflix to Robinhood, we have been fed products that run on AI. Why the fear now? Why the backlash now?
That’s something we should all think about and be apprehensive of. ChatGPT is not the first of its kind. DocuSign, Alexa, Google’s text completion, PDF scanners: they are all built from variations of the same basic math and probabilistic models[5]. Maybe the AI leaders are more afraid now because none of those older technologies could write software, none of them could create its own data, and none of them could make their makers’ jobs redundant. Now, maybe, they understand that the creature can turn on its Frankenstein.
After all, Karma is a boomerang.
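One last aside for the curious: the “same basic math” mentioned above is, at its heart, estimating which word is likely to come next. Here is a toy bigram model in Python. A real LLM is a neural network trained on vastly more data, but the core move (pick a likely next word given what came before) is the same idea:

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; real models train on trillions of words.
corpus = ("we should ask questions . we should enforce accountability . "
          "we should strive to understand the technology").split()

# Count how often each word follows each word: a crude estimate of
# P(next word | current word).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=5):
    """Greedily extend `word` with the most probable next word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break  # dead end: nothing ever followed this word
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("we"))  # -> "we should ask questions . we"
```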
[3] Hinton's exit, Thrun's opinion, and Sam Altman's fear.
[4] Book suggestion: Stolen Focus by Johann Hari.
[5] https://builtin.com/data-science/beginners-guide-language-models