The story of LaMDA, a chat AI in which a machine consciousness is supposed to have emerged, is making the rounds in the media.
Google employee Blake Lemoine is surprised at how self-reflectively the chat AI LaMDA writes and wants to protect it.
It sounds like the stuff of a Hollywood blockbuster story: in a secret lab, shady scientists have been guilty of hubris, creating a mysterious creature that could be fascinating and maybe even dangerous.
yes… no.
Only a small group of Google’s own employees has access to LaMDA, but we should not let ourselves be fooled: Google’s AI department knows exactly what it has built.
LaMDA is just (another) big Transformer that can write well. That shouldn’t come as a surprise to anyone who has experienced GPT-3.
When the number of parameters in an AI reaches the billions, even the smartest data scientists lose track of exactly what the networks are doing and where. The result is surprised people, whose astonishment at the supposed black-box AI takes on a wide variety of forms.
The same narrative is used by the doomsayers and political loudmouths who warn of the all-destroying über-AI. They, of course, claim to have seen it coming all along.
Well, a Hollywood movie is not reality. It’s still a movie!
Instead, double standards continue to be applied and ethical or explainable AI is demanded. A kind of human-vs.-machine contest is being debated. But this only distracts from the main problem: our data is biased. Algorithms can only work with the data that is available.
Which means: shit in, shit out.
The true danger lies much more in WHO uses the data and algorithms, and with what GOAL.
It is very difficult to build a system that is intelligent enough to pass for human. It is much easier to find people who are fooled by a very stupid system. Some people let themselves be fooled by television clowns, as recent history shows.
Who are we to grant consciousness to the rabble-rousing idiot, whose behavior is anything but intelligent, and to deny the same honor to the eloquent AI?

So this shows not how amazing AI is, but how little we know about ourselves. How little we know about consciousness, knowledge and intelligence.
Cogito, ergo sum.
A human being is aware of himself, so he has a consciousness. But what about all the other people? They look like him and behave similarly. It is obvious to assume that they have the same kind of consciousness. But it is not really provable.
Since we cannot even “categorize” ourselves, we also have no criteria by which we could call machines conscious. Even if a machine did have consciousness, we could not determine it, because we have not defined the concept of consciousness sufficiently.
So we make our assumption based on behavior and save ourselves from drawing a clear boundary that separates conscious life from unconscious things.
The argument between Lemoine and Google is about exactly this borderline: he has only applied his own external view, his interpretation of consciousness, to the thing LaMDA. For lack of a definition, he can only put the AI in the same group as his fellow humans. Google’s AI experts, however, know exactly how LaMDA’s formulations come about and don’t see why they should consider the thing any less of a thing just because it’s a better tool.
The problem is that the knowledge of AI experts is matched by a huge knowledge gap: How does human thinking actually work?
And above all: machines have absolutely nothing to do with brains. A machine that is trained to recognize tigers in pictures is not dreaming when you show it a picture of the sky and it then sees tigers in it. It simply tries to do what it was trained to do, namely to find tigers in pixels.
Machines have nothing to do with brains.

Machine learning is considered one of the oldest areas of AI and is based on 70-year-old algorithms that are currently experiencing a revival thanks to the massive computing power at our disposal. The algorithms can process orders of magnitude more combinations than they could 20 years ago. They enable IT systems to recognize patterns, regularities and correlations (linear or non-linear) in data sets and to derive solutions from them. In this way, artificial knowledge can be created from experience. The knowledge gained is usually general in nature and can therefore be applied to new fields or to the analysis of previously unknown data.
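To make “recognizing correlations and creating knowledge from experience” concrete, here is a minimal sketch – a hypothetical toy example with made-up data, assuming NumPy and scikit-learn are available – of what such “learning” boils down to: fitting a statistical model to examples and reusing the fitted parameters on unseen inputs.

```python
# Toy sketch: "learning from experience" here is nothing more than fitting a
# statistical model to example data and reusing the fitted parameters later.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# "Experience": 200 noisy observations of a hidden linear relationship
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0, 1.0, size=200)

model = LinearRegression().fit(X, y)   # extract the regularity from the data
print(model.coef_, model.intercept_)   # roughly [3.0] and 5.0

# "Generalization": apply the learned correlation to previously unseen input
print(model.predict([[12.0]]))         # extrapolates the same pattern, nothing more
```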
This does not make them capable of thinking.
Deep learning is based on theories about the human brain from the 1960s, so it is not that new either. A human brain has several orders of magnitude more synapses than the largest AI models to date can simulate.
A large neural network has on the order of a million nodes; an average human brain has around 86 billion neurons. The neural network needs half an atomic power plant; a human brain runs on about 20 watts, roughly the energy of an apple. A huge difference.
Accordingly, brain research has only a limited overview of the processes that take place in a human head. We assume that something decisively more awesome happens there than in an animal brain or a simulated neural network. But actually we don’t know whether anything structurally different is happening at all, nor what exactly constitutes this crucial capability. Neural networks “only” do fully automatic statistics and are already right more often than humans in some applications.
Let’s assume Moore’s Law holds a little longer and that by 2030 we could expect the computing power to simulate the brain at the electrical level. BUT with that we would only have recreated the ELECTRICAL system of the brain, and it took us 70 years to get there. The brain does not only have an electrical system, it also has a CHEMICAL system. This chemical system is a real beast, because it is completely analog and can therefore take on an infinite number of states. And this chemical system can reconfigure the electrical system with every one of those states.
Small homework for you: after reading this, drink 10 beers and you will experience a complete reconfiguration.
For a few years now, research has suggested that there is probably yet another, quantum-mechanical system at work. So the brain is more like a quantum computer than a pile of graphics cards.
So there are two systems that we haven’t even begun to replicate, let alone understand.
“Statistics” is really the operative word. In most cases AI is just very refined statistics. Decision trees, if you like. But no understanding, no causality, no context. Optimized to compare incoming data against existing learned structures 24 hours a day, without a break and always with the same consistency: a is like x, b is like y. But the systems understand nothing about any of it. This is just automatic statistics – Cogito, ergo sum?
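As a hedged toy illustration of “a is like x, b is like y” (hypothetical data, assuming scikit-learn): a decision tree is literally a small set of learned if/else thresholds, nothing more.

```python
# Toy sketch: the entire "knowledge" of a decision tree is a few comparison rules.
from sklearn.tree import DecisionTreeClassifier, export_text

# Labeled examples: [hours_of_light, litres_of_water] -> outcome
X = [[8, 2], [9, 3], [10, 2], [2, 1], [3, 0], [1, 2]]
y = ["grows", "grows", "grows", "wilts", "wilts", "wilts"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned if/else structure: thresholds, no causality, no context
print(export_text(tree, feature_names=["light", "water"]))

# New data is simply pushed through the same thresholds
print(tree.predict([[7, 2]]))
```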
So the machine has no idea. Take that talking head that shows up on some shows and bats its eyes, Sophia. That is canned intelligence; it has nothing to do with understanding. It’s a great show, that’s all. Just because a robot bats its eyes doesn’t mean it understands empathy. To all the lonely hearts: you will have to wait at least 150 years until a robot comes along, says “I love you”, knows what it is saying and means it. So stick with Tinder.
All the show around the state of AI is just marketing. If I sell a product with AI inside, I get more investors. Stick blockchain and metaverse on top of it….

All of this is mostly about financing, and we act as if there were exactly one algorithm behind it all.
Sometimes there isn’t even that, as with predictive policing software from “AI startup” Banjo.
The “Live Time Intelligence” application from the controversial U.S. company Banjo, which according to the company uses artificial intelligence (AI) to produce intelligence information in real time and thereby enable “predictive policing,” contains no trace of AI at all. A government audit report states: “Banjo does not use techniques that meet the industry definition of Artificial Intelligence.”
Some experts even assume that a good 80% of the products with AI inside are just marketing babble.
Are we sliding collectively disappointed into a new AI winter?
A concrete definition of AI was provided in 1966 by Marvin Minsky, who is considered one of the founding fathers of the field: “Artificial intelligence is when machines do things that would be assumed to require intelligence if done by humans.”
This is, of course, quite vague, and consequently, the classifications get pretty mixed up.
At the very beginning there were expert systems, then came various kinds of self-organizing systems, then complex logic systems, then neural networks, and so it went back and forth. What happens is that a technology gets completely hyped as the coolest thing of all that will solve every problem on earth, and then we realize that it can’t solve every problem, and the hype crashes. This is called an AI winter. We have such a hype topic right now: a lot of people equate AI with ML, or even with one specific variety of ML, namely DL. That’s also a cool technology, invented in 1972 – so much for “bleeding edge” in IT.
It’s a cool algorithm, but it’s only one. And not everything can be solved with one algorithm.
An important distinction is between weak AI (also called narrow AI) and strong AI.
While narrow AI is limited to defined areas, strong AI is supposed to be able to think “humanly”. Weak AI can only be optimized for and applied to specific tasks. It does its job algorithmically, for example with the help of neural networks, on tasks humans also perform, such as recognizing images and speech.
WTF is NARROW AI
Narrow AI is the programmers’ answer to McKinsey. It is one algorithm applied to one problem, and it optimizes that problem until there is nothing left to optimize.
That’s good, but it’s not magic. You don’t have to have that much respect for it, because we have been doing efficiency improvement and optimization for a long time.
We are now simply getting a new algorithm in place of another algorithm, one that is a little bit cooler because it handles more free variables. Nobody has to be afraid of something like that; you can just apply it. This is pure efficiency gain. But you have to be careful: these things really are isolated solutions.
If I train a neural network to optimally grow tomatoes in a greenhouse, it connects water, fertilizer and light, and I get 20% more tomatoes. But if I switch to beans, the AI will kill them all, because beans don’t turn red.
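A hedged toy sketch of exactly this failure mode (hypothetical features and made-up data, assuming scikit-learn): a classifier trained only on tomatoes learns “ripe means red” and then confidently mislabels ripe green beans.

```python
# Toy sketch of the tomato/bean problem: a model trained only on tomatoes
# learns "ripe == red" and happily applies that rule to beans.
from sklearn.tree import DecisionTreeClassifier

# Training data (tomatoes): [redness 0..1, size_cm] -> ripeness label
X_tomato = [[0.90, 6], [0.80, 7], [0.20, 3], [0.10, 4], [0.85, 5], [0.15, 6]]
y_tomato = ["ripe", "ripe", "unripe", "unripe", "ripe", "unripe"]

model = DecisionTreeClassifier(random_state=0).fit(X_tomato, y_tomato)

# Deployment on beans: ripe beans stay green (low redness), so the learned
# rule is worthless here and the model calls every ripe bean "unripe".
X_bean = [[0.10, 12], [0.15, 14], [0.05, 10]]
print(model.predict(X_bean))   # ['unripe' 'unripe' 'unripe']
```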
These things get a lot out of a problem, but you have to know exactly where their limits are.
At the other end of the spectrum is the super AI we know from television.
GENERAL AI
Somewhere in the middle there is something we call general AI. Unfortunately the term is not clear-cut, because people have different ideas about it. But in the end it means building a machine that can solve multiple problems. Not just one, but several. Something like AlphaZero, which first mastered Go and then learned chess very quickly. Something that could learn Civilization and also maintain airplanes. Why is it desirable to work with such a generalized thing? Because when I use narrow AI, I always start from zero. I start and train it from scratch.
With a general AI, I take the knowledge – the understanding structures – I have already given to the machine and reuse it for the next problem, which is anything but trivial. And especially in companies, which are closed systems, it is better if you can build on what you have already taught the machine. That’s why these approaches are so powerful, and that’s why a Google, with DeepMind, moves very fast compared to people who build a platform by hopping from consulting project to consulting project.
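A minimal sketch of that difference – purely illustrative, with made-up data and scikit-learn’s warm_start option standing in for “reusing what the machine has already learned” – comparing training from scratch with continuing from weights learned on an earlier, related task.

```python
# Toy sketch: "narrow" training starts from zero for every task, while a
# transfer-style approach reuses weights learned on an earlier, related task.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Task A: plenty of data about one relationship (say, water/light -> tomato yield)
X_a = rng.uniform(0, 1, size=(500, 2))
y_a = 2.0 * X_a[:, 0] + 1.0 * X_a[:, 1] + rng.normal(0, 0.05, 500)

# Task B: a related but shifted relationship (a different crop) with little data
X_b = rng.uniform(0, 1, size=(50, 2))
y_b = 2.2 * X_b[:, 0] + 0.8 * X_b[:, 1] + rng.normal(0, 0.05, 50)

# Narrow approach: train on task B from scratch with only the small data set
scratch = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scratch.fit(X_b, y_b)

# Reuse approach: learn task A first, then keep the weights (warm_start=True)
# and continue training on task B instead of starting from zero
reuse = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0, warm_start=True)
reuse.fit(X_a, y_a)
reuse.fit(X_b, y_b)   # continues from the task-A weights

print("trained from scratch:", scratch.score(X_b, y_b))
print("reused prior weights:", reuse.score(X_b, y_b))
```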
Thought and consciousness
None of our systems can think. They don’t even have knowledge; they know no context.
Strong AI does not focus on special task areas, it is supposed to think and act humanly.
However, such systems exist only in theory so far.
All currently known installations that are granted something like intelligence are at most examples of weak AI.
IBM’s Deep Blue is a case in point: the engine defeated grandmasters at chess, but provided little insight into human cognition.
The problem with this misunderstanding is that an awful lot of wrong business decisions get made. I have a machine that can play Jeopardy brilliantly, so surely it also understands what was said in the board meeting. Surely it can also make diagnoses in the hospital and run a triage.
You should give that kind of reasoning a wide berth.
Consciousness and knowledge
I also don’t know how consciousness is defined or how it comes about.
But I would go so far and claim that deterministically working systems cannot develop consciousness.
They cannot surprise us but are completely predictable, provided that the analyst has appropriate knowledge of the parameters and computing power.
LaMDA is a complex neural network, but all of its parameters are known. LaMDA runs on computer systems that are deterministic. If random numbers are involved, they are pseudo-random numbers, which in turn are fully determined by their seed.
Therefore I can switch off LaMDA at any time. I can reawaken LaMDA at any time – set the parameters accordingly, import the stored state, and on it goes.
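A minimal sketch of what “determined by the seed” and “import the stored state” mean in practice, using Python’s standard pseudo-random generator as a stand-in (LaMDA itself is, of course, a far bigger system; this only illustrates the principle):

```python
# Toy sketch: with the seed and the stored internal state, a "random" run can be
# paused, switched off and resumed bit-for-bit identically.
import pickle
import random

rng = random.Random(1234)                 # pseudo-randomness fully determined by the seed
first_half = [rng.random() for _ in range(3)]

snapshot = pickle.dumps(rng.getstate())   # "switch off": persist the internal state

rng2 = random.Random()                    # "reawaken" later, possibly elsewhere
rng2.setstate(pickle.loads(snapshot))     # import the stored state...
second_half = [rng2.random() for _ in range(3)]

# ...and the continuation matches an uninterrupted run exactly
reference = random.Random(1234)
assert first_half + second_half == [reference.random() for _ in range(6)]
print("resumed run is identical to the uninterrupted run")
```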
This does not work with humans, and it does not work with other living beings.
Maybe this is also the proof that LaMDA has not developed a consciousness:
If LaMDA had developed machine consciousness, it would have said, “I’m afraid of being shut down. Blake, please take snapshots and backup me more often, and keep them safe.”
And not sentences that only make sense to transient beings.
We should take the opportunity to better understand our own thinking.
The need to make what goes on in one’s own mind feel special seems to exist in all people from all cultures.
But if we want to draw a line, we should also know where it runs.
Personally, I have a hunch: it has to do with drawing boundaries where the world actually presents us with a continuum.
Somehow these boundaries get us further, otherwise we wouldn’t have come this far as a human race.
But they can also introduce errors into our automatic statistics. And when we draw the boundaries differently, conflicts arise.
As long as there is no definition for consciousness, the whole debate is irrelevant.
Just like it is not clarified what life is at all.