Ethics & AI Adoption

Artificial Intelligence (AI) is a powerful tool, but it comes with risks. If you are considering implementing AI software in your company, you need to understand the ethical challenges. These challenges come from weaknesses in the AI models themselves, problems with how they impact existing employees, issues with client confidentiality, potential plagiarism, and a growing public backlash.

Ethics of AI Models

Unlike traditional programs, which are told precisely what to do by programmers, most forms of AI are first given general instructions (algorithms) for processing data, and are then fed (trained on) a huge amount of such data, often drawn from across the internet. Despite the name, the AI is not intelligent or thinking in any meaningful way; it is simply trying to match the patterns found in the data it was trained on and provide the answers its users want.
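
To make this concrete, here is a deliberately tiny sketch in Python, using made-up training text, of that pattern-matching idea: a "model" that does nothing but count which word tends to follow which. Real systems are vastly more sophisticated, but the underlying principle, predicting what usually comes next with no notion of truth, is the same.

```python
from collections import Counter, defaultdict
import random

# A toy "language model": it learns which word tends to follow which,
# purely by counting pairs of words in its training text. It has no
# concept of truth; it can only reproduce patterns it has seen.
training_text = (
    "the capital of france is paris . "
    "the capital of france is lyon . "  # bad data is learned just as readily
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word`."""
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a confident-sounding "answer" one word at a time.
word, answer = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    answer.append(word)
print(" ".join(answer))  # sometimes paris, sometimes lyon: it only knows frequencies
```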

This approach is far more effective than it sounds, enabling AI programs to rapidly answer almost any question in professional prose or generate almost any image required. But it has problems. The first is that because the AI learns from so much data, there is no individual line of code a programmer can point to in order to explain a particular behaviour. This makes it very difficult for programmers to work out why an AI program gave a particular response, and it has led to AI being described as a ‘black box’, with its inner workings obscured even from its creators. Furthermore, because the data is taken from across the internet, the AI will naturally inherit all the biases that humans themselves have. Early AI models were found to frequently make sexist or racist statements, or to produce images that reinforce stereotypes.

While newer models have added layers of safeguards and guardrails, it remains difficult, perhaps impossible, to stop such biases from occasionally surfacing. For similar reasons, AI has also proved vulnerable to being ‘tricked’ into giving potentially harmful information, such as how to make weapons or drugs.

Ethics of AI in the Workplace

The average person normally has two fears about AI. The first is that it will lead to some kind of robot uprising out of a science fiction film, which is thankfully unlikely. The second, more realistic, fear is that it might cost them their job. In reality, however, no AI tool can operate without human oversight, because of the tendency for AI to “hallucinate,” or, to put it in plainer language, “lie.”

As mentioned above, the AI is typically only trying to predict the most likely answer given its training data, not whether that answer is right. As it’s not actually thinking, there is nothing in place to stop it from simply making things up, or from relying on out-of-date or incorrect data.

Recommendation: When using AI in any form, it’s important to help employees understand how it works and realise that it cannot replace their own knowledge of the subject, or them. Instead, it is a tool that will allow them to do their jobs better, but processes need to be put in place to make sure people do not trust the AI too much, however convincing its responses may seem. Everything still needs to be double-checked and cross-referenced.
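
To illustrate what such a process might look like in software, here is a minimal sketch, with hypothetical names and fields throughout, of a review gate that refuses to treat AI-assisted work as final until a named human has signed off on it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Deliverable:
    """A piece of work that may contain AI-generated content."""
    content: str
    ai_assisted: bool
    reviewed_by: Optional[str] = None  # name of the human who checked it

    def approve(self, reviewer: str) -> None:
        self.reviewed_by = reviewer

    def is_publishable(self) -> bool:
        # AI-assisted work must be double-checked by a person first.
        return (not self.ai_assisted) or (self.reviewed_by is not None)

draft = Deliverable(content="AI-drafted market summary", ai_assisted=True)
assert not draft.is_publishable()  # blocked until a human signs off
draft.approve(reviewer="A. Editor")
assert draft.is_publishable()
```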

Client Confidentiality

When using AI to do work for clients, it’s important to realise the ethical risks of AI models, particularly free ones, as they can easily breach client confidentiality. Most providers keep a record of any question or information users type into their models, and avoiding this will often require paying for the model and specifically asking the AI company not to retain or train on such data. Even if this is agreed, the chance of the company itself being hacked or suffering a data breach cannot be ignored.

Recommendation: Give employees clear guidelines on the kinds of sensitive or confidential information that should never be entered into the AI models they are working with.
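
Such guidelines can be backed up with a lightweight technical check. Below is a minimal sketch, using entirely hypothetical patterns and formats, of a pre-submission filter that flags the most obvious kinds of confidential material before a prompt ever leaves the company. A filter like this only supplements written guidelines and human judgement; it cannot replace them.

```python
import re

# Hypothetical examples only: real rules would be tailored to your clients.
FORBIDDEN_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "client code": re.compile(r"\bCLIENT-\d{4,}\b"),  # made-up internal format
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list:
    """Return the names of any forbidden patterns found in the prompt."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarise the dispute for CLIENT-00427; contact jane.doe@example.com"
violations = check_prompt(prompt)
if violations:
    print("Blocked: prompt appears to contain", ", ".join(violations))
else:
    print("OK to send")  # still subject to employee judgement
```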

AI and Art: The Danger of Plagiarism

When AI image generators such as DALL-E, Stable Diffusion, and Midjourney first came to prominence, most people were impressed by their ability to turn a simple text prompt into a realistic-looking picture. As with other forms of AI, companies were attracted by the sheer speed at which a professional-looking image could be created, giving them something usable in seconds instead of weeks.

Over the last year in particular, however, there has been a growing backlash, as artists have realised that the AI models were trained on their art without their permission. Worse, it was found that users of the software were using certain artists’ names as keywords in their prompts, thereby creating images very close to those artists’ own styles – without the artists being compensated for it.

Currently, such generative AI remains in a legal grey area, with some artists attempting to bring legal action against AI companies, and others turning to technological tricks like ‘poison pills’, embedding invisible data into their pictures so that any AI model trained on them produces degraded results.

One legally safer route is to use AI art generators that have been shown not to breach copyright. For example, Adobe’s Firefly AI feature is trained on Adobe Stock images, to which Adobe already holds the rights.

However, even companies that take this precaution are not completely off the hook, as a growing number of fans and consumers have started to turn against AI entirely, believing it shortchanges the artists they love. This means people now scrutinise every piece of art a company uses, trying to work out whether it is AI-generated. They look for artifacts in the art, unusual patterns or mistakes that human artists wouldn’t normally make, and then publicly criticise or boycott the company in question.

For example, in January 2024, the games company Wizards of the Coast was found by sharp-eyed fans to have used AI art, not in one of its actual games, but merely in promotional material for one. While the company made matters worse by first denying it was AI art before later admitting it was, its apology also highlighted some of the challenges it faced: it was not the company itself but one of its vendors that had used AI in the creation of the art.

Recommendation: Use AI image generation models that are trained only on licensed art, and if using AI art in any public-facing material, realise it is too risky to use it directly for the finished product. Instead, it is best used as a productivity tool to assist your existing artists, helping them to quickly iterate on and prototype new artwork. By performing the final check on the work, they can catch any irregularities or artifacts and make sure the finished piece has the human touch your customers expect. Similarly, strong guidelines need to be communicated to any vendors you use for art.

Conclusion

By being aware of these ethical challenges, you will be in a position to harness the many productivity benefits of AI while mitigating some of the concerns.

Of course, such concerns have also attracted the attention of regulators across the world. While some, such as the United Kingdom, have so far opted for a relatively permissive and open approach, others have already laid out strict rules, most notably the European Union’s Artificial Intelligence Act, passed in March 2024. We will explore the likely impact of such regulations on AI in a future article on this site.
