Generative AI: Where it can go wrong
You’re rushing out a report for your boss that’s due next week. You’ve read that ChatGPT can generate reports in an instant. It even passed a bar exam and a whole bunch of other tests.
Soon you’re wondering: Can I use ChatGPT to hack it?
Well, you won’t be the first one.
With ChatGPT reaching 100 million monthly users in January 2023, students and employees have been using it to reduce their workload.
AI models like ChatGPT are known as ‘Generative AI’, and yes, they’ve gained considerable attention for their ability to generate diverse content, ranging from articles and photos to artwork and code.
What is AI and generative AI?
You’d probably know this by now, but ‘AI’ stands for Artificial Intelligence. AI refers to the development of computer systems capable of performing tasks that typically require human intelligence. How? By creating algorithms and models that enable machines to understand, reason, learn, and make decisions.
Okay, but what about generative AI?
Generative AI is a subset of AI. It focuses on creating new content, such as images, text, or audio, rather than analysing or understanding existing data. Hence, the word ‘generate.’ Put simply, generative AI identifies patterns within existing data and generates new content based on those patterns.
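To make that “learn patterns, then generate” idea concrete, here’s a minimal sketch in Python: a toy bigram (Markov-chain) text generator. It’s nowhere near a real large language model, and the corpus and function names here are invented for illustration, but it shows the same basic loop of learning patterns from existing data and sampling new content from those patterns.

```python
import random
from collections import defaultdict

# Toy corpus: the "existing data" the model learns patterns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn patterns: for each word, record every word that follows it.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start, length=5, seed=None):
    """Generate 'new' text by sampling the learned transitions."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", length=5, seed=42))
```

The output looks plausible because every word pair was seen in the training data, which is also why such a model can only remix what it has seen, never truly understand it.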
Why are we only hearing about it now?
Depending on who you ask, the origins of generative AI date anywhere from the 1960s, when machine learning was conceived, to as recently as 2014, with the birth of GANs (Generative Adversarial Networks), a type of machine learning model. More recently, generative AI has seen a surge in progress and interest due to:
- Advancements in hardware and cloud computing
- Open-source technologies like Hadoop for scalable AI
- Strategic investments and increased awareness
- Breakthroughs in machine learning and neural networks
- Availability of big data for training generative models
Of course, the most recent generative AI milestones came in the 2020s, with the launch of ChatGPT and DALL-E by the artificial intelligence company OpenAI. Since then, competitors such as Midjourney and Google’s Bard have emerged.
The risks
While generative AI promises to make our jobs easier, it’s crucial to be aware of the potential risks and consequences associated with its use in workplaces, schools, and institutes of higher learning.
After all, companies like Samsung and Amazon have warned employees against using ChatGPT for fear of security risks. At its best, generative AI is a wonderful assistant, increasing your efficiency and output at work. At its worst… it can probably get you fired. Or sued.
In this article, we’ll take a closer look at these risks and offer a friendly, informed guide to navigating the intricacies of generative AI. We’ll cover some common pitfalls that could have embarrassing or even career-ending implications.
Let’s have a look:
1. Risk of unverified content, or ‘hallucinations’
Generative AI, despite its advanced capabilities, is not infallible. Sometimes it generates falsehoods and inaccuracies with an incredible degree of confidence; these are often called AI ‘hallucinations’. For that reason, it’s crucial to verify AI-generated content before using it or sharing it with your boss. Or bosses. Failing to do so can lead to embarrassing situations where inaccurate, false, or outdated information is disseminated.
For example: Here’s what ChatGPT told us when we asked for the location of GovTech Headquarters.
This information isn’t exactly wrong, but it is out of date. Our HQ is actually at 10 Pasir Panjang Road, while the GovTech Hive is at the Sandcrawler instead. Given that ChatGPT has a knowledge cutoff of September 2021, it isn’t guaranteed to provide correct, up-to-date answers! For best results, always verify the content with a human. Remember: AI can generate content, but it can’t take responsibility. After all, your boss is unlikely to accept ‘ChatGPT got it wrong’ as an answer.
2. Risk of plagiarism or infringement
Generative AI learns and generates content by analysing vast amounts of data, including past works. This means there’s a chance that generated content can closely mirror someone else’s work, to the point where it could be seen as intellectual property theft.
Of course, intentional IP theft or plagiarism is a whole other issue. It’s already a problem in schools, where students are using ChatGPT to write essays. And while Singapore acknowledges that there are opportunities to use AI in learning, its schools are also employing various strategies to detect plagiarism in assignments – including technological tools!
And besides…how will you learn to build a cohesive and coherent argument? Isn’t that the point of school?
By over-relying on ChatGPT, students won’t be able to hone their creativity, critical thinking, reasoning and problem-solving skills – the things that help differentiate humans from AI.
3. Unintended biases
Last but not least, generative AI models learn from existing data, which can contain and perpetuate biases. If the training data is not carefully curated, these biases can surface in generated content.
There have been instances where people used AI systems without considering the potential biases in the training data, resulting in outputs that reinforced discrimination or unfair practices. For example, DALL-E, an AI model that generates images from text prompts, was recently revealed to have both racist and sexist biases: prompts for ‘CEO’ tended to generate only images of white men, while prompts for ‘personal assistant’ or ‘nurse’ generated images of women instead.
In another incident, robots programmed with a popular artificial intelligence algorithm associated words like ‘homemaker’ or ‘janitor’ or ‘criminal’ with certain ethnicities.
Yikes.
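How does a skew like that creep in? Here’s a toy illustration of the mechanism: a model that learns word associations from its training data will faithfully reproduce whatever imbalance that data contains. (The sentences below are made up for the example; they are not real training data from DALL-E or any actual model.)

```python
from collections import Counter

# Hypothetical training sentences with a built-in skew.
sentences = [
    "he is a ceo", "he is a ceo", "he is a ceo",
    "she is a nurse", "she is a nurse", "she is a ceo",
]

# Count which pronoun each role co-occurs with in the data.
counts = Counter()
for sentence in sentences:
    words = sentence.split()
    pronoun, role = words[0], words[-1]
    counts[(role, pronoun)] += 1

# A model trained on this data inherits the skew:
# 'ceo' is mostly associated with 'he', 'nurse' only with 'she'.
print(counts)
```

Nothing in the counting step is malicious; the bias comes entirely from the data, which is exactly why curating training data matters.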
How can you protect yourself?
To protect yourself from the unintended consequences of using generative AI, here are three simple pointers to keep in mind:
- Always verify and review the outputs generated by AI tools before using or sharing them. Don’t rely solely on generative AI; human judgement remains critical for quality control. Provide the ‘sense-check’ yourself: cross-check facts with a good ol’ Google search, official websites, credible sources, or a human expert.
- Establish ethical frameworks and guidelines for the use of generative AI within your team. If you work in a team, promoting awareness of the potential risks and consequences will help everyone exercise caution and responsibility when using AI-generated content. This is especially important when multiple people share responsibility – you don’t want to get into trouble because someone else used the technology irresponsibly.
- Stay updated on the latest developments and best practices in generative AI. Engage in ongoing education and training to understand the nuances of the technology and its associated risks, ensuring you’re equipped to make informed decisions.
PS: Our TechNews blog is one such place to learn about these updates!
Remember…
Generative AI offers remarkable possibilities for enhancing productivity and creativity, but it also poses some real risks in the workplace. Ultimately, it’s a tool, and how skillfully it’s wielded depends on its user: you!