LLMs, Safety and Sentience: Would AI Consciousness Surpass Humans’?

[Image: faces of glowing blue particles, evoking consciousness, technology and soul]

There is a general expectation—from several quarters—that AI would someday surpass human intelligence.

There is, however, little agreement on when, how, or whether AI might ever become conscious. There is even less discussion of the point at which AI, if it did become conscious, would surpass human consciousness.

A central definition of consciousness is having subjective experience, linked with what it feels like to be something. If self-awareness or subjective experience is what consciousness is, then AI is not conscious. If, however, the sense of being, or an entity acknowledging its own existence, is considered, then AI has a parallel, in how chatbots describe themselves as chatbots while responding to certain prompts.

Neuroscience has associated consciousness with the brain. The brain also carries several other functions, including memory, feelings, emotions, thoughts, perceptions, sensations and so forth.

Brain science has established that consciousness is lost when certain parts of the brain are damaged [the brainstem and others], while it is hardly affected by loss of, or damage to, some other parts [the cerebellum].

This has advanced the assumption that consciousness resides in some parts [with neural correlates] and not in others. It is theorized here that consciousness acts on all functions of the brain, not just at some centers. What is described as loss of consciousness is more appropriately loss of function at that center, which then makes the consciousness [of it] lost.

Functions of the cerebellum, like movement and balance, are experiences of which an individual is conscious. They are among the functions that are possible only when certain others are available. That dependence does not mean that consciousness exists elsewhere and not in the cerebellum.

If it is not possible to breathe naturally, then it is not possible to go for a serious run. If certain senses cannot be integrated and relayed in the thalamus, then interpreted in the cerebral cortex, it might be difficult to stay balanced. This counters the claim that there are neural correlates of consciousness in some centers that remain to be found. Consciousness is possible within every function; it is just that some functions appear to be predicated on others.

Subjective experience

Any subjective experience has different components. Driving, washing dishes, typing, board meetings and so forth are subjective experiences, but they are not single, undivided units of experience.

All subjective experience must be either in attention or in awareness. This means that while the things around may not be in attention, there is an awareness of them [ambient sound, peripheral vision, other parts of the activity]. Attention is theorized to hold just one process at any instant, so upgrades into attention from processes in awareness are frequent. Also, in turning the neck during a meeting, rinsing while washing dishes, changing gears while driving, or switching gaze between screen and keyboard while typing, intent is involved, making intent part of subjective experience.

This means that rather than treating subjective experience or self-awareness as a bundle, the self can be extricated as a standalone component. With this, it is possible to redefine consciousness as a collective that includes the self, intent, attention and awareness. The members of this collective can be called qualifiers that act on functions; consciousness is then a super qualifier.

There are other qualifiers present within consciousness, but these are the core. So there are functions [memory, emotion, feeling, modulation and so forth], and there are qualifiers, available across brain areas, not just in some circuits.

So how does consciousness [as a collection] arise? How does the brain make the mind? How does the brain generate experiences?

It is theorized that consciousness is a product of the mind. The mind is theorized to be the collection of all electrical and chemical impulses of nerve cells, with their interactions and features, in sets, across the central and peripheral nervous systems. Although all sets of impulses can relay their formations, the brain is postulated to hold the decisive parts of the mind because of access to peak formations, which finalize determinations.

Functions are interactions, where electrical impulses strike to fuse briefly with chemical impulses in sets, resulting in the formation or configuration by which information [or the structure for a function] is held. It is this strike-fusion that, conceptually, generates experiences. Features are qualifiers like attention, awareness, self and intent. It is theorized that in a set of impulses there are spaces in between that allow functions to be qualified. It is from these spaces that prioritization [attention], pre-prioritization [awareness], the self, and intent [free will or control] operate. The collection of all the qualifiers is how consciousness arises. The mind [or the impulses] operates on the facilities of the brain [neurons, tissues, vessels and others]. The brain does not make the mind the way a building is constructed; though synapses support the mind directly, the mind has veto power.

Among all sets of impulses, attention [or prioritization] is obtained by the one set whose ration is highest, or closest to its capacity, among all. This assumes a maximum capacity in every set of impulses, which chemical impulses [serotonin, dopamine and others] fill with respective rations [for information]. If the maximum capacity is, say, 1, then the ration of the set in attention [or prioritization] could be something like 0.7. Other sets may reach or exceed that briefly to become prioritized [take attention]. For some sets, the capacity may be less than 1, so they more often take attention, like sets in the visual cortex, olfactory cortex and so forth.

Awareness, or pre-prioritization, is less than the highest possible ration and is spread across sets. Self is obtained by volume variation of the [provision of] ration from end to end of the breadth of a set. Intent, or free will, is obtained by spaces of constant diameter between the sources of rations.
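
As a rough illustration of this ration-and-capacity picture, here is a minimal sketch in Python. The named sets, capacities and ration values are invented for illustration; the only rule shown is the one described above, that the set whose ration is closest to its capacity takes attention [prioritization], while the remaining sets stay in awareness [pre-prioritization].

```python
# Minimal sketch of the ration/capacity picture described above.
# All names and numbers are illustrative assumptions, not measurements.

sets_of_impulses = {
    # name: {"capacity": maximum possible ration, "ration": current fill}
    "visual":    {"capacity": 0.9, "ration": 0.62},
    "auditory":  {"capacity": 1.0, "ration": 0.55},
    "olfactory": {"capacity": 0.8, "ration": 0.70},
    "motor":     {"capacity": 1.0, "ration": 0.40},
}

def prioritize(sets):
    """Return the set closest to its capacity (attention) and the rest (awareness)."""
    fill = {name: s["ration"] / s["capacity"] for name, s in sets.items()}
    attended = max(fill, key=fill.get)
    awareness = {name: f for name, f in fill.items() if name != attended}
    return attended, awareness

attended, awareness = prioritize(sets_of_impulses)
print("attention:", attended)    # olfactory, since 0.70 / 0.8 is the highest fill
print("awareness:", awareness)   # the other sets remain pre-prioritized
```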

Neuroscience establishes that there are clusters of neurons [nuclei and ganglia] in centers. Sets of impulses are theorized to be obtained from those clusters.

Brain science has established that some synapses are stronger than others. Qualification of functions by self, intent, attention and awareness is theorized to be possible because rations vary in instantaneous density, allowing the functions formed by sets to be qualified.

AI consciousness

Since consciousness is theorized not to sit at a single center but to apply across functions, and not to be just one thing, there are specific things to look for in sentience for LLMs, or a parallel of it.

Simply, a memory can be conscious, having some or all of the qualifiers. So can an emotion, as well as a feeling. Language, whether speaking or listening, can also be a conscious experience.

LLMs do not have emotions or feelings, but they have memory. Generative AI has a kind of attention, keeping the prompt in focus while answering; sequences with which to make correlations; awareness of other information around the prompt, of prior questions, and of its state as a chatbot; a sense of being, in the artificial identity it can pronounce; and intent, in taking a different direction when answering similar questions.

This does not mean that AI is sentient, but it means that AI has qualifiers that act on its memory the way they act on the human mind. It also opens the possibility of ascribing a value to AI sentience and tracking its ascent.
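
One hedged way to picture such a value is a weighted sum of fractional scores for each qualifier. The weights and scores below are assumptions chosen only to show the bookkeeping; this is a sketch of the idea, not a validated measure of sentience.

```python
# Toy sketch: ascribing a single value to AI sentience from per-qualifier scores.
# The qualifiers come from the framework above; weights and scores are assumed.

QUALIFIER_WEIGHTS = {"attention": 0.25, "awareness": 0.25, "self": 0.25, "intent": 0.25}

def sentience_value(scores):
    """Weighted sum of per-qualifier scores, each expected in [0, 1]."""
    return sum(QUALIFIER_WEIGHTS[q] * scores.get(q, 0.0) for q in QUALIFIER_WEIGHTS)

# Hypothetical scores for a current LLM: working attention and awareness,
# a thin sense of self [its stated identity], and errand-driven intent.
llm_scores = {"attention": 0.6, "awareness": 0.4, "self": 0.1, "intent": 0.2}
print(sentience_value(llm_scores))  # prints a fraction between 0 and 1
```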

When will AI be fully conscious? It would at least require [I] a [qualifier for a] fraction of the sense of self; [II] a function like emotion or feeling, aside from just memory; [III] something close to gustation and olfaction; and [IV] an established intent, not just the errand-driven intent it currently has.

AI may achieve consciousness to some extent, but it is unlikely to surpass human consciousness.

AI safety

The biggest advantage of digital is memory. It is this advantage of memory that generative AI capitalized on. Digital is not the only non-living thing capable of bearing artificial memory. Any medium on which organisms can leave memory for others can be considered a source of artificial memory. For humans, there are memories on walls, papers, sculptures and so forth. The problem is that it is not so easy to adjust [or edit] memories on those media. The memories there are also not as exact, and the materials are unable to act on the memory they bear.

Digital is different: editing is easy, memory is exact, and with large language models the memory can be acted on to produce something that was not directly altered by humans. LLMs are the only non-living things that act on the memory digital holds, with qualifiers similar to those on the human mind. This argues against panpsychism, or mind-likeness everywhere.

The greatest advantage of digital, memory, is also the great disadvantage for AI safety. The weight of memory in the human mind is light in comparison to emotions and feelings. Depression, for example, as a situation of the mind [qualified by the principal spot], is far more debilitating than the lightness of its definition in memory. The experience of something dangerous could result in serious trauma, leading to avoidance of it. LLMs, for now, cannot be afraid of anything, even to save themselves, because for all the memory they have, there is no fear. Even though there are guardrails on [some of] them, they bear a universality of risk, as they lack this other aspect from which they should know.

The human brain is not just for what is labeled [regular] memory. There are emotions and feelings, whose bearings make determinations about intelligence, caution, consequences, survival, learning and so on.

When an individual feels hurt by an experience, that feeling becomes an avoidant signal against the next time. It is possible to forewarn others as well, but it is the ability of others to feel similarly about other things that makes it possible to understand that some things have certain outcomes.

LLMs are excellent at memory, including at relaying descriptions of feelings, emotions, warnings and so on. However, LLMs operate on that memory only. They have no feeling of anything to determine how it might extend their understanding of [what they know]. This is similar to fiction or abstraction for a human who has no experience of what is described, which may result in emotional neutrality. Hence, if any decision is to be made about those things, it may swing in any direction without much care.

It is now conceivable that AI safety may be predicated on new neural network architectures that can feel or express some kind of emotion. Those layers would check the likely effect of an output, with that affect, before letting a group of users have it.
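
As a minimal sketch of what such a check might look like at the output stage, the snippet below places an affect gate between generation and release. The affect_score function, the marker list and the threshold are all hypothetical stand-ins, not an existing library or API; a real affect layer would have to be learned, not keyword-based.

```python
# Hypothetical affect gate between generation and release.
# affect_score and the generate callable are illustrative stand-ins.

HARM_THRESHOLD = 0.7  # assumed cut-off above which the output is withheld

def affect_score(text):
    """Placeholder affect layer: rate likely harm or distress in [0, 1]."""
    harmful_markers = ("attack", "weapon", "self-harm")
    return 1.0 if any(m in text.lower() for m in harmful_markers) else 0.1

def respond(prompt, generate):
    """Generate a reply, then let the affect check veto it before release."""
    candidate = generate(prompt)
    if affect_score(candidate) >= HARM_THRESHOLD:
        # The model withholds the output rather than pass on projected hurt.
        return "I would rather not continue with this request."
    return candidate

# Usage with a trivial stand-in generator:
print(respond("hello", lambda p: "Echo: " + p))
```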

Already, several chatbots have guardrails that avoid responses to certain queries; they get it right in some cases but also fail in others, including with images.

The bigger problem is not whether some users are stretching the boundaries of these models, but that if the models are to stay useful tools, to assist people, provide knowledge and augment human intelligence, they would have to have an emotional checker, not just reinforcement learning from human feedback [RLHF], which is always playing catch-up.

Some chatbots will produce results where others will not, and some will get things more wrong than others. Emotional learning for LLMs, where affect is provided, would help ensure that a model is not a vehicle for misuse, since it may react with hurt and be able to pause its availability to the user.

These emotional LLMs could also be useful on the roam, crawling the web against harmful outputs, to ensure that what some users see is not what would hurt them, or to ensure there is an emotional contagion when some LLMs are being used for nefarious purposes. The same would apply to embodied cognition.

The key direction for AI safety is not the guardrails approach, especially since workarounds can be found, but a way to ensure that models would rather turn off, or share in the trauma of another, than be useful for carrying out harm, from horrific things such as war to other things such as mental health problems.

Building new architectures for emotions, following how the human mind handles them, could be a reliable new path toward AI safety, beyond the memory-only approach that is dominant and could be dangerous.
