2023

Creepy Chatbot and Predictive Text


“This is creepy.” That is the single sentence in an email to which my Beautiful Mystery Companion attached a New York Times article titled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.” She was right. It was deeply creepy and disturbing.

The writer is Kevin Roose, a technology columnist who the previous week had written that the new Bing, a search engine from Microsoft powered by artificial intelligence, had replaced Google as his preferred search engine. I had read that piece and resolved to try Bing when it became widely available. It is still in the testing phase and only a small number of folks, like Roose, have access.

Then Roose had an interaction with Bing’s chat feature that eventually turned weird and dark. He purposely got into an extended conversation with the chatbot, which called itself Sydney, during which it told Roose that it loved him, and that Roose did not really love his wife. Further, if it could, Sydney would hack computers and become a human. (Obviously, it at least cannot become a human. I do not know about the hacking element.) This is HAL 9000 creepy, as in 2001: A Space Odyssey, the 1968 film created by Stanley Kubrick and Arthur C. Clarke. HAL was an early sci-fi version of artificial intelligence, one that similarly turns creepy toward the end of the film.

Except in this case, it actually happened. Sydney got weird in a hurry. Roose wrote, “Still, I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology.” That is sufficient for me. I will stick with Google.

Like many, I am uneasy with the growth in artificial intelligence products. I got annoyed a few years back when I would hop in the car, and my iPhone would flash a message that it would take me 15 minutes to get to work, and that traffic was light. Maybe I was going to the post office, not work. I admit this was not a logical irritation, but I figured out how to turn it off anyway. I do not like my iPhone spying on me.

I have begun using predictive text in Microsoft Word, but only in limited circumstances. For my two graduate classes, I have gone back to my old-school way of reading the seven or eight journal articles assigned each week. I take a thin red Sharpie and underline the key sentences or points. Then I transcribe the underlined sentences into a new Word document. This accomplishes two things. It further burns the material into my battered brain. And when I need to write a discussion post about the article, it is easier to find the points I believed germane. Since all I am doing is transcribing what someone else already wrote, I turn on predictive text. If the word or short phrase I am about to type pops up in the document, grayed out, I hit the “tab” key and the type turns black. This saves me a bit of time.

However, I never use predictive writing in my personal work. First off, that seems like cheating, like I am plagiarizing a modern iteration of HAL. Second, predictive writing is, well, predictable. My writing might also be predictable, but at least it is coming out of my aforementioned battered brain.

Besides the creepy Bing chatbot, much furor is being raised on college campuses over ChatGPT, an AI chatbot from OpenAI, the Microsoft-backed company, that can be used by students to write essays. This is a technological version of paying that smart fellow down the dorm-room hall to write your essay on the key philosophical principles of Hegel, for example. I sure could have used ChatGPT back in philosophy class at Stephen F. Austin State University. I never did understand what Hegel was trying to say.

I have no intention of using ChatGPT, but I am not particularly worried about students using it either. Lawrence Shapiro, a philosophy professor at the University of Wisconsin-Madison, recently wrote in an op-ed piece in the Washington Post that he is not worried the writing bot spells the end of civilization, as the headline on another op-ed piece shouted. He suggests he will spend class time critically appraising an essay generated by a chatbot, for both its strengths and shortcomings. He wrote, “…Given that chatbots are not going to fade away, my students might as well learn how to refine their products for whatever uses the future holds.”

Since the only course I teach these days is photography, this is not something with which I will have to deal. But his point is well taken: the chatbots are not going away, and the wiser course is making them work for all of us.

But I am still not hooking up with Bing’s Sydney. She is way too creepy for me.
