
by Sebastian Nesci (HERH)
It’s certainly true that innovation has always outpaced legislation. When it comes to artificial intelligence, it wasn’t very long ago that the technology was extremely basic. Only recently have we advanced our technological capabilities to the point where we need to be wary of the repercussions.
I sincerely believe that while there are many benefits to using AI tools, we can’t ignore the consequences. To start, I’d like to talk a little bit about ChatGPT. Many of you may be familiar with it – it was launched on November 30th, 2022, and has greatly impacted the world since.
ChatGPT was created by OpenAI and was originally powered by a model from their GPT-3.5 series. Over time it moved to GPT-3.5-turbo, and they now offer GPT-4. Each new generation of model provides a more extensive feature set, which drives deeper integration into society.
We must look at the world around us and face it: unless we create legal safeguards, AI will take over many jobs. Already, we can see writers, customer service representatives, computer programmers, legal counsel, and other positions being slowly replaced. We have to enact legislation requiring that the majority of the workforce remain human-powered.
Additionally, privacy issues come to mind. What happens when your doctor is replaced by a computer? Sure, it may be programmed to assist you, but what happens when people realize that your medical institution is a prime target for hacking attempts? Data breaches are all too common nowadays, and health platforms aren’t any safer than other digital services.
Another data protection issue is the chat interface itself, which most AI companies provide. Although specific privacy policies vary from provider to provider, it’s highly likely that your conversations are being used to further train their AI models.
Much like the American Miranda warning, “anything you say can and will be used against you,” applies to these tools as well: law enforcement agencies can subpoena AI companies for logs of your chat history. This means that anything you say to ChatGPT or similar AI tools can be shared with the police.
We can’t ignore the impact that generative AI will have on our youth, either. As a high school student, I can confidently say that many of my peers are using ChatGPT (or tools like it) to do their schoolwork for them. Because my generation isn’t developing the skills to express our own ideas and think critically, we will be deficient in both.
What happens when we’re all reliant on AI for everything, and then our internet goes down? Even if all critical utilities are working, what do we do when OpenAI starts to charge people for ChatGPT use? We’ve grown dependent on these companies, so we seemingly have no choice other than to start paying up. It’s an amazing business model: find a customer base, get them hooked, and drive up the price.
Outside of the GPT family of AI models, OpenAI also has a model called Sora. The purpose of Sora is to provide an easy-to-use interface for creating video from text alone. It won’t be long before we start to see entire movies created with generative AI. All you’d need are some AI ‘prompters’ to coax the right output from Sora, plus editors and other post-production crew. With just that limited staff, you would have an entire film. I feel like Sora is an excellent example of how well-intentioned innovation can eliminate a ton of jobs.
Question Everything
When it comes to new innovations, regardless of what they are, you must think deeply about how they will influence your life. If we fail to do so at a societal level, we are being gravely irresponsible. What matters most is that while AI can be very helpful, we need to subject these tools to ample scrutiny. Otherwise, we may be forced to live in a future where our jobs, privacy, security, and natural intelligence are at stake. When everything is automated, we also lose purpose. Not only will we lack jobs, we will lack fulfillment. And without a sense of fulfillment, many people will be unhappy.
What a sad life that will be.