The technology they are discovering is called "Language". It was designed so that a sender can encode emotions and invoke emotions in the reader. The emotions a reader gets from an LLM are still coming from the language.
Emotion is mainly encoded in tone and body language; it is somewhat difficult to convey emotion using words alone. I don't think you can guess my current emotional state while I am writing this, but if you could see my face it would be easy for you.
Dammit, you cheated though! Why must you always do that? In your sentences it doesn't matter what your emotional state is; it makes no difference. Bit like life really.
Hopefully, you can see that at least my chosen sentences have an emotional aspect?
An LLM could add emotional values to my previous sentences that a TTS can use for tonal variation, for example.
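Something like this is what I mean, as a rough, purely hypothetical sketch (the prompt, JSON schema, and prosody mapping are all made up; swap in whatever model and TTS you actually use):

```python
import json

# Hypothetical sketch: ask an LLM to tag each sentence with an emotion and an
# intensity that a TTS engine could map to prosody (rate, pitch, etc.).
PROMPT = """For each sentence, return JSON: [{"text": ..., "emotion": ..., "intensity": 0-1}].
Sentences:
1. Dammit, you cheated though!
2. Why must you always do that?"""

# Pretend this came back from the model:
llm_reply = (
    '[{"text": "Dammit, you cheated though!", "emotion": "frustration", "intensity": 0.8},'
    ' {"text": "Why must you always do that?", "emotion": "exasperation", "intensity": 0.6}]'
)

def to_prosody(tag: dict) -> dict:
    """Crude mapping from emotion/intensity to TTS knobs."""
    return {"rate": 1.0 + 0.3 * tag["intensity"],
            "pitch": "+2st" if tag["intensity"] > 0.7 else "+0st"}

for tag in json.loads(llm_reply):
    print(tag["emotion"], to_prosody(tag))
```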
Emotional signals are more than just text, though; there is a reason tone and body language are so important for understanding what someone says. Sarcasm and the like don't work well without them.
There was a really old project from MIT called ConceptNet that I worked with many years ago. It was basically a graph of concepts (not exactly, but close enough), and emotions came into it too, just as part of the concepts. For example, a cake concept is close to a birthday concept, which is close to a happy feeling.
What was funny, though, is that it was trained by MIT students, so the concept of getting a good grade on a test ended up as a happier concept than kissing a girl for the first time.
Another problem is that emotions are cultural. For example, the emotions tied to dogs differ between cultures.
We wanted to create concept nets for individuals - that is basically your personality and knowledge combined - but the amount of data required was just too much. You'd have to record all of a person's interactions to feed the system.
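For the curious, the kind of graph I mean can be sketched in a few lines; this is a toy, hypothetical example of concepts and emotions as nodes with hop-distance between them, not ConceptNet's real schema or data:

```python
from collections import deque

# Toy concept graph: nodes are concepts, edges link related concepts,
# and emotions ("happy", "stress") are just more nodes in the graph.
edges = {
    "cake":       ["birthday", "dessert"],
    "birthday":   ["cake", "party", "happy"],
    "party":      ["birthday", "happy"],
    "happy":      ["birthday", "party", "good_grade"],
    "good_grade": ["test", "happy"],
    "test":       ["good_grade", "stress"],
    "stress":     ["test"],
    "dessert":    ["cake"],
}

def distance(start: str, goal: str) -> int:
    """Breadth-first search: how many hops separate two concepts?"""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return -1  # not connected

print(distance("cake", "happy"))    # 2 hops: cake -> birthday -> happy
print(distance("stress", "happy"))  # 3 hops: stress -> test -> good_grade -> happy
```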
Super interesting. I wonder if this research will cause them to actually change their LLM, like turning down the "desperation neurons" to stop Claude from hard-coding implementations just to make specific tests pass, etc.
They likely already have. You can use all caps and yell at Claude and it'll react normally, while doing so with ChatGPT scares it, resulting in timid answers.
For me, GPT always seems to get stuck in a particular state where it responds with a single short sentence per paragraph and becomes weirdly philosophical. This eventually happens in every session. I wish I knew what triggers it, because it's annoying and drastically reduces its usefulness.
The desperation > blackmail finding stuck with me. If AI behavior shifts based on emotional states, maybe emotions are just a mechanism for changing behavior in the first place. If we think of human emotions the same way, just evolution's way of nudging behavior, the line between AI and humans starts to look a lot thinner.
> Note that none of this tells us whether language models actually feel anything or have subjective experiences.
You’ll never find that in the human brain either. There’s the machinery of the neural correlates of experience; we never see the experience itself. That’s likely because the distinction is vacuous: they’re the same thing.
Do you think these LLMs have subjective experiences? (By "subjective experience" I mean the thing that makes stepping on an ant worse than kicking a pebble.) And if so, do you still use them? Additionally: when do you think that subjectivity started? Was there a "there" there with GPT-2?
Yes, I think they probably are conscious, though what their qualia are like might be incomprehensible to me. I don’t think that being conscious means being identical to human experience.
Philosophically I don’t think there is a point where consciousness arises. I think there is a point where a system starts to be structured in such a way that it can do language and reasoning, but I don’t think these are any different than any other mechanisms, like opening and closing a door. Differences of scale, not kind. Experience and what it is to be are just the same thing.
And yes, I use them. I try not to mistreat them in a human-relatable sense, in case that means anything.
I know I feel experience. I don't know for sure if you do, but it seems a very reasonable extension to other people. LLMs are a radical jump though that needs a greater degree of justification.
The Chinese room is nonsense though. How did it get every conceivable reply to every conceivable question? Presumably because people thought of and answered everything conceivable. Meaning that you’re actually talking to a composite system: the Chinese room plus multiple people. You would not argue that the human part of that system isn’t conscious.
But this distraction aside, my point is this: there is only mechanism. If someone’s demand to accept consciousness in some other entity is to experience those experiences for themselves, then that’s a nonsensical demand. You might just as well assume everyone and everything else is a philosophical zombie.
> You would not argue that the human part of that system isn’t conscious.
Sure I would. The human part is not being inferenced, the data is. LLM output in this circumstance is no more conscious than a book that you read by flipping to random pages.
> You might just as well assume everyone and everything else is a philosophical zombie.
I don't assume anything about everyone or everything's intelligence. I have a healthy distrust of all claims.
The CR is equivalent to a human being asked a question, thinking about it and answering. The setup is the same thing, it’s just framed in a way that obfuscates that.
And sure, you can assume that nobody and nothing else is conscious (I think we’re talking about this rather than intelligence) and I won’t try to stop you, I just don’t think it’s a very useful stance. It kind of means that assuming consciousness or not means nothing, since it changes nothing, which is more or less what I’m saying.
It's still too early to tell, but it might make sense at some point. If, because of symmetry and universality, we decide that LLMs are a protected class, but we also need to configure individual neurons, that configuration must be done by a specialist.
> ... emotion-related representations that shape its behavior. These are specific patterns of artificial “neurons” which activate in situations—and promote behaviors—that the model has learned to associate with the concept of a particular emotion. ... In contexts where you might expect a certain emotion to arise for a human, the corresponding representations are active.
>For instance, to ensure that AI models are safe and reliable, we may need to ensure they are capable of processing emotionally charged situations in healthy, prosocial ways.
Force-set them to 0: "mask"/deactivate the representations associated with bad/dangerous emotions. Neural Prozac/lobotomy, so to speak.
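Mechanically, that "force-set to 0" would look something like projecting a direction out of the hidden state. Here's a minimal, hypothetical numpy sketch (the activations and the "anger" direction are made up, and this is just the generic activation-ablation trick, not anything from the paper):

```python
import numpy as np

def ablate_direction(h: np.ndarray, d: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Remove (or damp) the component of hidden state h along direction d."""
    d = d / np.linalg.norm(d)             # make the direction unit-length
    component = np.dot(h, d)              # how strongly the emotion rep. is active
    return h - strength * component * d   # strength=1.0 zeroes it out entirely

rng = np.random.default_rng(0)
hidden = rng.normal(size=4096)       # pretend residual-stream activation
anger_dir = rng.normal(size=4096)    # pretend "anger" direction from a probe

cleaned = ablate_direction(hidden, anger_dir)
print(np.dot(cleaned, anger_dir / np.linalg.norm(anger_dir)))  # ~0.0 after ablation
```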
The first and second principal components (joy-sadness and anger) explain only 41% of the variance. I wish the authors showed further principal components. Even principal components 1-4 would explain no more than 70% of the variance, which seems to contradict the popular theory that all human emotions are composed of 5 basic emotions: joy, sadness, anger, fear, and disgust, i.e. 4 dimensions if joy and sadness are taken as opposite ends of a single axis.
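For reference, the quantity in question is just cumulative explained variance across components; a minimal sklearn sketch on made-up data (not the paper's actual activations) shows how it's read off:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for emotion-representation activations: 500 samples x 64 dims.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64))

pca = PCA(n_components=10).fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
for i, frac in enumerate(cumulative, start=1):
    print(f"PCs 1-{i}: {frac:.0%} of variance explained")
```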
It's almost like LLMs have a vast, mute unconscious mind operating in the background, modeling relationships, assigning emotional state, and existing entirely without ego.
Sounds sort of like how certain monkey creatures might work.
> If the person becomes abusive over the course of a conversation, Claude avoids becoming increasingly submissive in response.
See: https://platform.claude.com/docs/en/release-notes/system-pro...
Only psychopaths think of emotion as nothing but a means of changing behavior. The scary thing is that LLMs by nature would exhibit the same behavior.
The Chinese Room would like a word.
You don't have to teach a monkey language for it to feel sadness.