People think AIs are conscious. What could this mean for bots in OpenSim?

(Image by Maria Korolov via Adobe Firefly.)

I’ve been interacting with OpenSim bots — or NPCs — for nearly as long as I’ve been covering OpenSim. Which is about 15 years. (Oh my God, has it really been that long?)

I’ve been hoping that writing about OpenSim would become my day job, but, unfortunately, OpenSim never really took off. Instead, I covered cybersecurity and, more recently, generative AI.

But then I saw some reporting about new studies about AI, and immediately thought: this could really be something in OpenSim.

The study was published this past April in the journal Neuroscience of Consciousness, and it showed that a majority of people – 67% to be exact – attribute some degree of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to see them as conscious entities.

Then, in May, another study showed that 54% of people, after a conversation with ChatGPT, thought it was a real person.

Now, I’m not saying that OpenSim grid owners should run out and install a bunch of bots on their grids that pretend to be real people, in order to lure in more users. That would be dumb, expensive, a waste of resources, possibly illegal and definitely unethical.

But if users knew that these bots were powered by AI and understood that they’re not real people, they might still enjoy interacting with them and develop attachments to them — just like we get attached to brands, or cartoon animals, or characters in a novel. Or, yes, virtual girlfriends or boyfriends.

In the video below, you can see OpenAI’s recent GPT-4o presentation. Yup, the one where ChatGPT sounds suspiciously like Scarlett Johansson in “Her.” I’ve set it to start at the point in the video where they’re talking to her.

I can see why ScarJo got upset — and why that particular voice is no longer available as an option.

Now, as I write this, the voice chatbot they’re demonstrating isn’t widely available yet. But the text version is — and it’s the text interface that’s most common in OpenSim anyway.

GPT-4o does cost money, both to send it a question and to get a response. A million tokens’ worth of questions — or about 750,000 words — costs $5, and a million tokens’ worth of responses costs $15.

A page of text is roughly 250 words, so a million tokens comes to about 3,000 pages. That means $20 buys a lot of back-and-forth. But there are also cheaper platforms.

Anthropic’s Claude, for example, which has tested better than ChatGPT in some benchmarks, costs a bit less — $3 for a million input tokens, and $15 for a million output tokens.
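To put those prices in perspective, here’s a back-of-the-envelope cost calculator in Python. The per-million-token prices are the GPT-4o and Claude figures quoted above; the usage numbers (chats per day, message lengths) are my own illustrative assumptions, not data from either vendor.

```python
# Rough monthly cost estimate for one LLM-powered NPC.
# Prices per million tokens are the figures quoted above; the
# traffic and message-size numbers are illustrative guesses.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "gpt-4o": (5.00, 15.00),
    "claude-3.5-sonnet": (3.00, 15.00),
}

def monthly_cost(model, chats_per_day, turns_per_chat,
                 input_tokens_per_turn=100, output_tokens_per_turn=150):
    """Approximate monthly API bill for a single conversational NPC."""
    in_price, out_price = PRICES[model]
    turns_per_month = chats_per_day * turns_per_chat * 30
    cost_in = turns_per_month * input_tokens_per_turn / 1_000_000 * in_price
    cost_out = turns_per_month * output_tokens_per_turn / 1_000_000 * out_price
    return cost_in + cost_out

for model in PRICES:
    # Assume a modest grid: 50 visitor chats a day, 10 exchanges each.
    print(f"{model}: ${monthly_cost(model, 50, 10):.2f}/month")
```

Under those assumptions, a fairly busy NPC comes out to roughly $40 a month on GPT-4o and a few dollars less on Claude, which is pocket change next to most grid hosting bills.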

But there are also free, open-source platforms that you can run on your own servers, with comparable performance. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o is in first place with a score of 1287, Claude 3.5 Sonnet is close behind with 1272, and the (mostly) open source Llama 3 from Meta is not far behind, with a score of 1207 — and there are several other open source AI platforms near the top of the charts, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.

I can easily see an OpenSim hosting provider adding an AI service to their package deals.
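What might that look like? Below is a minimal sketch of the server side of such a service: a small HTTP relay that an in-world NPC script could call (say, via llHTTPRequest from an LSL script) and that forwards each visitor chat line to any OpenAI-compatible chat endpoint, whether a paid API or a self-hosted Llama 3 behind a server like Ollama or llama.cpp. The endpoint URL, model name, route, and persona prompt are all assumptions for illustration; this is not an existing OpenSim feature.

```python
# Minimal sketch of an NPC chat relay. An in-world script posts a
# visitor's chat line here; we forward it to an OpenAI-compatible
# chat endpoint and return the reply as plain text. Everything
# configurable below is an assumption: swap in your own endpoint,
# model, and persona.
from flask import Flask, request
import requests

app = Flask(__name__)

LLM_URL = "http://localhost:11434/v1/chat/completions"  # e.g. a local Ollama server
MODEL = "llama3"  # hypothetical local model name
PERSONA = ("You are Ada, a friendly shopkeeper NPC on an OpenSim grid. "
           "Keep replies to a sentence or two, and if anyone asks, "
           "say plainly that you are an AI, not a real person.")

@app.route("/npc-chat", methods=["POST"])
def npc_chat():
    visitor_line = request.get_data(as_text=True)  # raw chat text from the in-world script
    resp = requests.post(LLM_URL, json={
        "model": MODEL,
        "messages": [
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": visitor_line},
        ],
        "max_tokens": 150,  # keep replies short, and cheap
    }, timeout=30)
    reply = resp.json()["choices"][0]["message"]["content"]
    return reply  # the in-world script can osNpcSay() this string

if __name__ == "__main__":
    app.run(port=8080)
```

Note the disclosure baked into the persona prompt: as argued above, the bots should be fun to talk to, but users should always know they’re talking to an AI.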

(Image by Maria Korolov via Adobe Firefly.)

Imagine the potential for creating truly immersive experiences in OpenSim and other virtual environments. If users are predisposed to see AI entities as conscious, we could create non-player characters that feel incredibly real and responsive.

This could revolutionize storytelling, education, and social interactions in virtual spaces.

We could have bots that users can form meaningful relationships with, AI-driven characters that can adapt to individual user preferences, and virtual environments that feel alive and dynamic.

And then there’s the potential for interactive storytelling and games, with quests and narratives that are more engaging than ever before. We could create virtual assistants that feel like true companions, or even build communities that blur the lines between AI and human participants.

For those using OpenSim for work, there are also applications here for business and education, in the form of AI tutors, AI executive assistants, AI sales agents, and more.

However, as much as I’m thrilled by these possibilities, I can’t help but feel a twinge of concern.

As the study authors point out, there are some risks when AIs feel real.

(Image by Maria Korolov via Adobe Firefly.)

First, there’s the risk of emotional attachment. If users start to view AI entities as conscious beings, they might form deep, potentially unhealthy bonds with these virtual characters. This could lead to a range of issues, from social isolation in the real world to emotional distress if these AI entities are altered or removed.

We’re already seeing that, with people feeling real distress when their virtual girlfriends are turned off.

Then there’s the question of blurred reality. As the line between AI and human interactions becomes less clear, users might struggle to distinguish between the two.

Personally, I’m not too concerned about this one. We’ve had people complaining that other people couldn’t tell fantasy from reality since the days of Don Quixote. Probably even earlier. There were probably cave people sitting around, saying, “Look at the young people with all their cave paintings. They could be out actually hunting, and instead they sit around the cave looking at the paintings.”

Or even earlier, when language was invented. “Look at those young people, sitting around talking about hunting, instead of going out there into the jungle and catching something.”

It was the same when movies were first invented, when people started getting “addicted” to television, or to video games… we’ve always had moral panics about new media.

The thing is, those moral panics were also, to some extent, justified. Maybe the pulp novels that the printing press gave us didn’t rot our brains. But think of Mao’s Little Red Book, the Communist Manifesto, and that thing Hitler wrote that I won’t even name: the men behind them were aided and abetted by the books they wrote.

So that’s what I’m most worried about — the potential for exploitation. Bad actors could misuse our tendency to anthropomorphize AI, creating deceptive or manipulative experiences that take advantage of users’ emotional connections and lead them to be more tolerant of evil.

But I don’t think that’s something that we, in OpenSim, have to worry about. Our platform doesn’t have the kind of reach it would take to create a new dictator!

I think the worst that would happen is that people might get so engaged that they spend a few dollars more than they planned to spend.



Source: Hypergrid Business