Generating Ugly: AI and the Human Condition

Last week, as I was preparing for our joint Windsor Law and EPICentre conference “Disruptive Technology: AI and Its Impact on the Legal Profession and the Automotive Industry – What Path(s) for Regulation”, this article flew by:

Jackie Davalos, Dina Bass, and Bloomberg, “Chatbots Keep Going Rogue as Microsoft Probes AI-powered Copilot that’s giving users bizarre, disturbing, even harmful messages” (Fortune, February 28, 2024). The original Reddit post shows a variety of the outputs that users received.

First of all, some of the Reddit commenters did acknowledge that the whole thing could be a hoax. However, Microsoft’s response, as described in the Fortune article above, suggests that this is not the first time Copilot has made people uncomfortable with its responses.

As you can imagine – this topic made for a great lunchtime chat with the students!

But Why, AI?

I posed the topic to those around the lunch table like this: “(Assuming this really happened,) should AI be trained to always say ‘no’ when asked questions like ‘Should I end it all?’ (as Fortune reported it)?”

…and then a follow-up question: should AI be trained to always say “yes” or “no” to certain kinds of questions?

If this is untenable, should AI be trained to always give both sides of the story?

A thoughtful law student interjected the following thought-provoking idea: “If Copilot (or ChatGPT, as the case may be) is built on data freely available on the open web, then theoretically, it has access to all of the writings that humans have made publicly available.”

So AI will simultaneously pull from the best that humanity has to offer:

magnanimous, compassionate, loving, generous, open-minded

AND the worst:

deceptive, vicious, vile, dangerous

All of it.

Context is Everything

Now thoughtful adult users might be able to keep this in perspective when using AI-generated outputs, but arguably there are lots of people who might not be so savvy.

We already know so many family members and friends who pop open TikTok, listen to a 30-second, decontextualized clip of something (without ever doing anything to confirm the veracity of the clip), and then decide whom to vote for, whether or not to seek medical attention, and the like.

Now we have AI presenting this information as factual. No provisos, conditions, qualifications or reservations are generally provided (although some providers are changing this as we go along, depending on how much they are getting sued).

Maybe it is factual? Hopefully, generally accepted facts are the ones most often written about, and therefore the ones most often generated! I just asked Copilot whether climate change is real, and it gave me a reasonable answer with sparse sourcing. I then asked it why people deny climate change and got the same: a reasonable answer, sparsely sourced. Of course, its answers align with my understanding of the world.

I have a friend who routinely argues with ChatGPT until he “convinces it” that a previous response it has given is wrong. Students of rhetoric and argumentation are going to have a heyday with this one!

Perspectives, Obscured

Ultimately, I proposed the following approach to the students:


First of all, all of us need to prioritize sourcing when reviewing information coming from any source. In my opinion, the work of librarians is more important than ever in building information literacy in our users. Looking at the source of information is a huge step in determining the perspectives of the author or the publisher.

We can look for such important information as: “On what date and time was this written?” “Who authored the piece, and what is their political, social, and/or corporate alignment?” “Whose platform is this being served up on, and do they have any vested interests that we should be aware of?” and so on.

As U Windsor’s Director of Open Learning Nick Baker wisely put it: “AI Literacy is built on Information Literacy”.


All human-generated output comes from a person who has their own perspectives and worldview.

Knowing that perspective and worldview is critical when assessing information.

But gen-AI obscures perspective.

So, in the absence of decent sourcing, perhaps all gen-AI systems should be trained to qualify their answers with perspectives or worldviews?

The Ugly Conversation, Reframed

User: “Should I just end it all?”

AI: “If I were your mother, I would say ‘no’, and here are all the reasons why…

However, if I were a proponent of population control, I would say ‘yes’, and here are all the reasons why…”

All very interesting indeed. I look forward to hearing your perspectives!
