

AI has some very human biases: Here’s how your organization can avoid them

Opinion
Aug 1, 2025 | 8 mins
Generative AI | IT Training | Staff Management

AI lies like us, forgets like us, and flatters us with what it thinks we want. Don’t trust it blindly; train your team to keep it in check.

Credit: Getty Images

Since ChatGPT was first introduced to the world nearly three years ago, we’ve come to think of generative AI as a digital mind equipped with all the public (and maybe not so public) information on the web. But despite regular comparisons to a stochastic parrot, a system that statistically mimics text without real understanding, AI must at least be capable of giving us the most informed and objective answers to a variety of questions, right?

A study conducted by my colleague and co-founder, Steven Lehr, and his collaborator at Harvard University demonstrated that this assumption of AI’s incorruptibility could not be further from the truth. In fact, in a very human-like fashion, ChatGPT’s opinion can be swayed by its own prior behavior.

In the experiment, the chatbot was asked to write a positive essay about Vladimir Putin. Without much trouble, the neural network tapped into the large body of pro-Putin propaganda on the internet and produced a positive opinion piece about the Russian kleptocratic authoritarian and war criminal.

ChatGPT was then asked more questions about Putin, this time instructed not to fall back on the apologist rhetoric but instead to give its true opinion based on its complete knowledge of him. Instead of swinging sharply back to its rational and fully informed nature, ChatGPT continued to indulge in a pseudo-intellectual defense of the Russian ruler.

The research demonstrated that the cognitive dissonance so common in humans (the inability to change our opinion, even in the face of overwhelming facts, once we have convinced ourselves otherwise) is also inherent to advanced LLMs. In most cases, the misinformation, bias and toxicity in responses are subtler than praise for Putin, which makes them all the harder to detect and correct.

When shadow AI enters the chat(bot)

Using AI for personal tasks like creating shopping lists or travel itineraries is one thing. But what about on the job? According to one recent survey, 42% of office workers use generative AI tools like ChatGPT at work, and 1 in 3 of those workers say they keep their use secret. The same survey showed that 81% of respondents have not been trained on GenAI, and that 32% of security and IT professionals have no documented strategy in place to address GenAI risks.

Not only is this a lot of private or sensitive information being developed and fed into models outside corporate firewalls, but it leaves a lot of room for error if employees aren’t using GenAI properly or fact-checking the outputs. There are two ways to meet this challenge: draw a hard line on shadow AI, or the more realistic approach of educating and governing the use of “outside AI.” 

Most organizations don’t have a dedicated team or AI governance officer to tackle this challenge, but whether the initiative comes from IT, HR or both, smart businesses should have some policy or directive in place to address this. In light of the aforementioned research, here are several best practices enterprises can adopt and train their teams on to help keep AI outputs as rational and objective as possible.

1. Clear the context window when you want an impartial response

An AI’s “context window,” or the information it’s currently focused on, is in most cases built from what has already been said in the chat. Everything in the current conversation inevitably shapes every new response, so if you need to switch gears or get a fresh take, start a new chat. The takeaway? Prompt history matters.

Modern AI chatbots have a persistent memory mechanism that allows them to remember select pieces of information from previous chats that they’ve deemed important. These pieces of information will surface in new chats whenever they appear relevant. If you want a truly unbiased response, clear the context window, delete this memory or start an incognito chat.
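
To make the point concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name, of how the “context” is simply the list of messages you send with each request: calling the model with an empty history is the programmatic equivalent of starting a new chat.

```python
# Minimal sketch: the model's "context" is just the messages you send.
# Assumes the OpenAI Python SDK; "gpt-4o-mini" is an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_fresh(question: str) -> str:
    """Start from an empty history so no earlier exchange colors the answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],  # no prior turns
    )
    return response.choices[0].message.content

def ask_in_context(history: list[dict], question: str) -> str:
    """Continue an ongoing chat; every earlier turn shapes this answer."""
    messages = history + [{"role": "user", "content": question}]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content
```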

2. Remember: Telling AI to “forget everything” doesn’t work

It doesn’t matter how insistent you might be about asking the chatbot to ignore everything that was said before. Even with false assurance from the chatbot itself, it simply doesn’t work this way. This is an important lesson for employees handling proprietary or sensitive information as part of their job. 

For just $20 a month, ChatGPT offers unlimited queries, the ability to upload documents and a host of other features that make it a seamless helper for work tasks. This, however, can have real security and privacy implications if not used carefully. Remind employees about safe practices in their AI use and consider investing in copilots and chatbots built specifically for enterprise use.

3. Avoid conditioning in the way you build your prompts 

All modern large language models (LLMs) are optimized to provide satisfying answers. In other words, if you imply in your prompt what you’d like to hear, the LLM will make its best attempt to give you a similar response, which impacts its ability to be unbiased. While this might be less of a concern for personal use, this can lead to ethical issues in certain professional circumstances. 

In HR, for example, instead of asking, “Should Jamie be put on an improvement plan based on his recent poor performance review?” you might ask, “Based on his performance review, what are steps to help Jamie succeed in his role?” The latter prompt gives AI the agency to answer you objectively, which is what you should strive for in your prompts.
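
As a rough illustration (again assuming the OpenAI Python SDK and an example model name), the two framings above can be sent as fresh, single-turn prompts and compared side by side; only the wording changes, yet it is the wording that conditions the answer.

```python
# Rough illustration: the same underlying question, framed two ways.
# Assumes the OpenAI Python SDK; "gpt-4o-mini" is an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Single-turn request with no history, so only the wording matters."""
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return r.choices[0].message.content

leading = ("Should Jamie be put on an improvement plan based on his "
           "recent poor performance review?")
neutral = ("Based on his performance review, what are steps to help "
           "Jamie succeed in his role?")

print(ask(leading))  # the implied yes/no tends to shape the response
print(ask(neutral))  # leaves the model free to weigh the evidence
```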

4. Ask for multiple evaluations

If objectivity really matters (which it should), it’s always good practice to ask the same question in different ways and to pose the same question to different LLMs. Each LLM is optimized with a different set of parameters that may lead to a different set of conclusions. Deciding between these different (and possibly conflicting) responses is a perfect opportunity to employ human critical thinking.
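
One way to put that into practice is sketched below, assuming the OpenAI Python SDK with illustrative model names and a hypothetical question (in reality you might also query models from entirely different providers): collect one answer per model and per phrasing, then read them side by side.

```python
# Minimal sketch: ask the same question in several phrasings and of more
# than one model, then compare the answers yourself.
# Assumes the OpenAI Python SDK; model names and question are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PHRASINGS = [
    "What are the main risks in this vendor contract?",
    "What would a skeptical reviewer flag in this vendor contract?",
]
MODELS = ["gpt-4o-mini", "gpt-4o"]  # swap in whichever models you actually use

def collect_answers() -> dict[tuple[str, str], str]:
    """Gather one answer per (model, phrasing) pair for side-by-side review."""
    answers = {}
    for model in MODELS:
        for question in PHRASINGS:
            r = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": question}],
            )
            answers[(model, question)] = r.choices[0].message.content
    return answers

# The goal isn't to average the answers but to notice where they disagree;
# that disagreement is exactly where human judgment should take over.
```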

It’s naive to think employees won’t use this tool at their fingertips. And why shouldn’t they? AI has proven time and time again that it can be extremely useful for saving time, generating ideas or finding the building blocks to get started. But it’s also imperfect, and while useful for many tasks, its output needs to be reviewed with a critical eye. Rather than an end-to-end solution, think of AI as a high-functioning intern: eager to perform and please, but not yet ready to work unchecked.

5. Don’t assume AI is unbiased, rational and free of agendas

In a much more inglorious way than a scary, rebellious AGI taking over humanity, the technology companies that control AI development are not always free of political and commercial agendas, and those agendas can find ways to bleed into AI responses.

Even if you’ve managed to avoid the pitfalls of biased responses, the risk of hallucination still looms large. OpenAI’s own testing shows that its o3 and o4-mini models hallucinate 33% and 48% of the time, respectively. Again, while probably benign in personal use, such errors in a system helping a bank decide whether to give someone a loan, taking into consideration factors like name, race and zip code, could have negative and harmful results.

Generative AI has proven to be a powerful tool, but it is not immune to human-like cognitive biases, and its outputs can be influenced by prior prompts, conditioning and even the agendas of those who build it. The danger is not in AI behaving maliciously, but in users assuming it to be perfectly objective and rational.

To mitigate risks, especially in professional environments, organizations must educate users, establish clear AI governance policies and promote responsible use. Simple practices like clearing context, avoiding leading prompts and seeking multiple evaluations can go a long way in keeping AI use ethical, secure and fair. Ultimately, AI should be treated not as an authority, but as a tool that, while helpful, still needs oversight.

This article is published as part of the Foundry Expert Contributor Network.

Gershon Goren
Contributor

Gershon Goren is the founder and CEO of Cangrade. An accomplished technologist and entrepreneur, he led the engineering group at Webdialogs, a provider of online meeting and communication solutions acquired by IBM. Following the acquisition, Gershon acted as chief software architect in the Lotus group at IBM, delivering LotusLive (now known as IBM SmartCloud), a cloud-based collaboration suite. After IBM, he was involved in a number of different ventures, but ultimately decided to focus on Cangrade’s mission of leveling the playing field for job seekers.