Let me start with:
There is no artificial intelligence.
When we talk about AI we mean specialized algorithms, fancy statistics. Therefore AI (so far) poses no real danger. Well, no danger FROM the AI itself — but the biggest and most real danger comes from us.
Depending on what we mean by AI — if we are talking about AGI or ASI — we are probably far from being exposed to those risks today or tomorrow.
If we’re talking about the emergent societal and personal problems coming from the massive deployment of “narrow” AI solutions, there are risks posing dangers to safety, privacy, and some essential human rights today. Specifically, they require us to think through how we make decisions in hybrid ecosystems where machines and humans work together, and how biased the datasets or algorithms are.
Most of these risks arise from man-made biases. The AI is just the “Excel sheet”.
I don’t expect an AGI within my lifetime. All of these “breakthroughs” are just marketing bullshit. Today’s AIs are highly specialized expert modules with only one advantage over the brain: they don’t get tired or distracted. But in energy efficiency (food vs. power plant), pattern recognition, and combinatorial skills, we are still far ahead.
- We have only just started to understand how AI works at the electrical level. Deep learning is based on brain models from the 1960s; machine learning is based on 70-year-old mathematics.
- Natural intelligence also has a chemical level. Try drinking ten beers and then deciding something.
- For about five years we have also seen hints that there might be a quantum component.
So we can expect that natural intelligence is so complex that we will need more decades just to understand it.
Those who do not understand, or who listen to the loudmouths, ask for AI ethics. But ethics itself is very “flexible” and adapts to trends and needs. Ethics is more like a system of coordinates: before 9/11 we had one ethic, after it another; before corona one, after it another; and so on.
So one prerequisite is transparency.