Image credit: Freepik

AI Highlights the Ethical Issues in Software Development

February 29, 2024


“Hey, Siri…” We’ve come a long way since artificial intelligence made its mainstream debut. Over the last few years, we’ve watched software developers race to create the next best AI tool. And while AI has already proven an invaluable technological asset, it has also highlighted the ethical questions that have echoed through the discourse on AI since machine learning’s capabilities were first hypothesized. Sci-fi Hollywood would have us believe that the real danger comes when “the robots rise up” and “take over”, but the imminent danger AI poses is, in some cases, far more sinister: the slow displacement of human labor, the loss of creativity, and the amplification of biases, to name a few.

In a fast-growing industry, with the AI market estimated to reach $1.8 trillion by 2030, rampant development runs unchecked, and the question remains: should some software simply never be built? Where are the ethical checks and balances, and what safeguards are developers implementing to prevent the misuse of their creations? While AI is created to enhance the human experience, we’re seeing it hold up a mirror, with some of our biggest ethical issues staring back at us.

AI and Democracy

One of the biggest areas of concern around software subversion is how AI tools can be used to undermine democratic principles. AI is not inherently insidious: its powerful data analysis can segment constituencies and help politicians drill down on the issues affecting voters across socio-economic, geographic, and demographic groups.
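To make that concrete, here is a minimal, hypothetical sketch of voter segmentation using k-means clustering. The features, numbers, and cluster count are illustrative assumptions on synthetic data, not drawn from any real campaign:

```python
# A toy illustration of constituency segmentation with k-means.
# All voters here are synthetic; features and cluster count are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 1000

# Synthetic constituents: age, household income (in $k), urban-density score.
voters = np.column_stack([
    rng.normal(45, 15, n),   # age
    rng.normal(60, 25, n),   # income
    rng.uniform(0, 1, n),    # 0 = rural, 1 = dense urban
])

# Standardize features so no single scale dominates, then cluster into
# four segments a campaign might message separately.
scaled = StandardScaler().fit_transform(voters)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

for k in range(4):
    group = voters[segments == k]
    print(f"segment {k}: {len(group)} voters, "
          f"mean age {group[:, 0].mean():.0f}, "
          f"mean income ${group[:, 1].mean():.0f}k")
```

The same clustering output that helps a campaign tailor policy messaging can just as easily be used to micro-target disinformation, which is where the ethical line starts to blur.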

From a campaign perspective, AI has proven useful in cutting the cost of campaign material, allowing posters, videos, infographics, and more to be generated at a fraction of the usual price. In Argentina’s most recent presidential election, contenders relied heavily on AI-generated images to capture voters’ hearts and minds and turn them against their opponents.

The ethical issue with generative AI is how it can be used to create and spread misinformation, disinformation, and propaganda. Countries such as Venezuela and China have already turned to deepfake technology to promote ruling parties and governments in an effort to retain power and drown out dissenting voices. Without a regulatory framework or legal ramifications to curb this activity, there’s no end in sight for these practices. And with more than fifty nations going to the polls this year, we’re likely to see a sharp increase in AI being used to subvert truth, transparency, and ultimately democracy.

AI Needs to Overcome Bias

For businesses, and by extension society, we need to consider the impact AI bias has on marginalized communities. In a world already fraught with discrimination along the lines of gender, race, sexuality, socio-economic status, and more, AI has the potential to deepen inequality, undoing hard-won gains that vulnerable people secured through decades of advocacy, protest, and lobbying. A study from the University of California, Berkeley, found that biased algorithms in fintech lending tools disadvantaged Latino and African-American mortgage applicants: they were charged higher interest rates, which collectively cost them an additional $765 million per year.

Artificial intelligence is data-hungry, and if there is bias in the input data, there will be bias in the analytical output. The ethical implications of this are murky, because more often than not the humans supplying the data carry unconscious biases, or simply blind spots. A classic example came from a tech company developing a CV-screening tool.

The developers trained the system on data describing what an ideal candidate’s CV looks like, drawn from their own previous hires. Because those hires came from a predominantly male team, the AI deduced that male applicants made good candidates. With companies scrambling to adopt AI-driven tools, the mass adoption of biased software could have disastrous results for thousands of applicants and hundreds of companies.
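As a minimal, hypothetical sketch of that failure mode (synthetic data and illustrative feature names, not the actual tool described above), consider a screening model trained on historically skewed hiring decisions:

```python
# A toy illustration of how a screening model inherits bias from its labels.
# Everything here is synthetic; the features are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)        # a genuine qualification signal
is_male = rng.integers(0, 2, n)    # 1 = male applicant

# Historical hiring labels that favored men regardless of skill.
hired = (skill + 1.5 * is_male + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

print(f"coefficient on skill:  {model.coef_[0][0]:.2f}")
print(f"coefficient on gender: {model.coef_[0][1]:.2f}")
# The gender coefficient comes out large and positive: the model has
# learned the historical bias as if it were a legitimate hiring signal.
```

Nothing in this code mentions discrimination; the bias arrives entirely through the training labels, which is exactly why it is so easy to ship by accident.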

Artificial Creativity

One of the most high-profile cases of people pushing back against the unethical use of AI-driven software was the pair of 2023 strikes by SAG-AFTRA and the Writers Guild of America. Lasting 118 and 148 days respectively, the strikes dominated headlines as actors, writers, and other players in the creative industry refused to continue working on projects until agreements were reached guaranteeing better pay, working conditions, and residuals.

One of the most contentious issues throughout the strikes was studios’ push to negotiate contracts stipulating that they could use artificial intelligence to reproduce an actor’s likeness. In the most extreme reading of such a clause, actors in the not-so-distant future could come in for a day or two to be “scanned”; their voice, body, and face could then be reused across dozens of projects for years, with compensation granted only for that single day of “work”.

While it may seem far-fetched, in Disney’s Star Wars prequel Rogue One, the late actor Peter Cushing’s likeness was digitally recreated to reprise his character, Grand Moff Tarkin. This prompted many actors to add contract stipulations that specifically deny the use of their likeness posthumously.

Writers have also voiced concerns about the use of generative AI to produce scripts and storylines, effectively eliminating their job prospects and narrowing the scope of the creative output we see in media. Linking back to the previous point, if biases are perpetuated in AI algorithms, then the stories of marginalized people are also at risk of being erased. AI in the creative industry can be a positive tool that enhances aspects of film and TV production, but it comes at the risk of limiting creative expression and, where ethics are concerned, exploiting actors, writers, visual-effects artists, and others in the industry.

AI and the Risk of Fraud

Statistics show that cybercrime, scams, and fraud are on the rise, with reported cases in the United States jumping from roughly 467,000 in 2019 to just over 800,000 in 2022, at a cost of $10.3 billion in that year alone. Artificial intelligence has the potential to exacerbate the problem by letting criminals quickly and easily develop sophisticated schemes to mislead unsuspecting victims.

While most software is developed with the best of intentions, certain features can make it attractive to fraudsters. Consider email: being able to create a Gmail account in minutes offers users unmatched convenience, but it also makes it easy for threat actors to spin up thousands of fake addresses to dupe unsuspecting victims. The same is true of AI. Here are some features of artificial intelligence that have been used in fraudulent activity:

  • The speed and efficiency AI provides can be, and has been, used to automate fraudulent transactions, owing to its ability to process large amounts of data. 
  • AI can have a “black box” effect, where input and operational data are essentially invisible, offering a degree of anonymity that makes fraudulent activity difficult to trace back to any one person or organization. 
  • Artificial intelligence makes automation quicker and easier, a capability often exploited in scams that send out mass emails and texts. Generative AI can make these messages look realistic and credible, mimicking genuine communication from companies. 
  • Linked to the point above is the concern that generative AI can be used to create fake documents, cloned voice messages, and even realistic videos for use in high-level scams and fraud. 

The reality is that fraud is on the rise, and AI can accelerate the rate of online criminal activity. While software developers and engineers do their best to build guardrails against their tools being subverted, there is always the risk that ill-intentioned users will turn that software against others.

Conclusion

AI and its software have developed rapidly, from humble beginnings as a voice assistant on our cell phones setting reminders to generating high-quality videos from a few prompts. ChatGPT ushered in an era of AI infatuation, with millions fawning over the tool’s ability to rapidly generate responses to a seemingly endless range of requests.

But as we come down from the initial high, we’re noticing the glaring inequities and are concerned about the impending risks. There are major ethical questions that leaders across a number of fields are grappling with when it comes to the use of AI-driven software tools. 

Unchecked bias, the erosion of democracy, the loss of authentic creativity and storytelling, the power to effortlessly mislead and defraud people: these are just some of the factors to consider in how software is developed, who has access to it, and what guardrails should be put in place to prevent, as far as possible, the impending doom all the sci-fi movies warned us about.