Image: Alex Knight / Unsplash

Immigrants aren’t stealing your jobs: AI is!

Many say that the main difference between humans and other creatures is that humans use their knowledge to destroy themselves and the world around them. Whether it is through war, pollution, or simply farming, we have to admit that we haven’t done a great job. Maybe, then, the next step is to see if a robot could do it better. But if the only thing holding us humans back from total destruction is morality and emotion, taking away these restrictions and giving a robot free rein seems like a risky move.

Many jobs, including those belonging to writers of The Boar, are on the line due to the rise of AI programmes such as ChatGPT and Lensa. For both, you simply provide a prompt, either in written form or as a photo, and give the program time to create something. However, the explosion of interest in such sites has revealed some concerns about how much creative control we allow AI to have.

Users can quickly see that there has been some attempt to instil a level of morality or precaution into ChatGPT. The AI gives a polite reply when asked to write something inappropriate or offensive. Not only does it decline the request, but it often suggests an alternative idea or directs the user to helplines. It is clear that certain topics, names, or keywords are off-limits in the eyes of the AI.

AI has to draw the line between moral and immoral and between fiction and reality

However, these safeguards are quite easy to overcome. Simply by placing the scenario within ‘fiction’, the AI is more than happy to fulfil the request and quickly abandons any apprehension it had before. If asked, it would tell you the components needed to make a bomb, as well as the instructions on how to assemble it. This reveals how, in the wrong hands, AI could be aiding bullying, crime, or much worse. The ambiguity is problematic in other ways too, as the first thought many students have when coming across ChatGPT is ‘maybe it can do my assignments for me’. Because the AI struggles to draw the line between moral and immoral, and between fiction and reality, it often presents false answers in a believable format. This leaves students unsure how far such programs can be trusted.

It is clear how such a function could threaten the livelihood of many. However, art is a sector that I did not expect to be affected. Lensa can create digital art of you using only the 10 to 20 photos that you upload. But whether it be drawing, sculpture, or digital work, it is the human aspect of art that makes it so ingenious. The fact that a human has conjured up an image in their brain and made it a reality is what art is. AI removes that vital aspect, and what does it leave behind? A mimicry of what humans can create. This is not just a philosophical debate on how we define art. Many artists have found their style or aspects of their work used in Lensa’s art, and you can even sometimes see the remnants of an artist’s signature in the corner. The AI cannot ‘think’ of something new, only combine elements from other artists’ work. I don’t think many will disagree that artists will always be seen as superior and original. However, the ease of such an instant output is damaging in a commercial market. Consumers want speed and ease more than they want originality, which is why artists are sceptical of where programs such as Lensa will lead.

With AI, society’s prejudices and biases are unknowingly ingrained into the code

When we look at the trends produced by programs such as Lensa and ChatGPT, we see that biases are a huge problem. As in all sectors, the default is seen as the straight, white, thin male, and this comes up very often in AI. Many individuals have found that Lensa makes them seem skinnier and sexualised, either wearing very little clothing or in suggestive positions. This is the consequence of a robot trying to mimic something inherently human. We are often given one default perspective, even if it looks like we are seeing different works of art. With art, the consequences are minor compared to AI tasked with choosing candidates for a job or a loan, for example. Here, it must pick which applicant seems more intelligent or trustworthy, and prejudices of race, gender, religion, and more can all come into play. A key aspect of being human is our ability to learn, change, and grow, but with AI, society’s prejudices and biases are unknowingly ingrained into the code. It cannot evolve from this base in the same way that we can. The reason for this bias within AI systems does not stem from any racism or sexism of those who write the code. Rather, as AI gets its information from data, it reveals the lack of data and research on certain demographics, whether in a medical, social, or legal context. Until this bias is corrected in research across the board, we cannot expect AI to overcome it.

Scientist Gary Marcus tweeted: “Let us invent a new breed of AI systems that mix awareness of the past with values that represent the future we aspire to.” And while this way of thinking gives me hope for the future of this technology, I do not believe the examples we have seen so far live up to it. If this vision becomes a reality, then perhaps all our jobs are truly on the line. But for now, there are aspects of human creativity, interaction, and emotion that AI cannot compete with.
