CNews Interview

AI Development Must Not Fall Into the Hands of Destructive Leaders

Digitalization and the growing power of artificial intelligence may have consequences for humanity quite different from those it hopes for. Experts believe that modern AI still has many shortcomings, the key one being its inability to make balanced decisions.

Artificial Intelligence Can Bring Not Only Technological Wonders

CNews: The digitalization occurring in the global market has raised several important questions for humanity and business. For example, many experts say that artificial intelligence will only take on routine business tasks, leaving people with truly creative tasks. Do you agree with this?
Olga Lukina: I would like it to be exactly that way. AI experts' statements proclaim the idea of progress and of opportunities for human development. I certainly agree with the idea, but what will happen in practice is an open question. As an expert in the emotions, thinking, and behavior of leaders, I am cautious, because people often consciously want to do one thing yet do another, and they are not always ready to take responsibility for the consequences; sometimes they are not even ready to acknowledge the discrepancy between what was planned and what resulted.
People remain people, and as long as their internal conflicts, suppressed childhood fears, grievances, and anger live on inside them, their unconscious motives may run counter to their rational intentions and goals. Even now there are things in the field of AI development that genuinely alarm me.
CNews: Today the use of artificial intelligence has many consequences that are still theoretical but quite predictable. Many, for example, predict serious problems with rising unemployment.
Olga Lukina: I think such a danger exists, and it will affect people in less highly qualified professions first. I am sure that a top-class lawyer or doctor, however much AI develops and learns, will not lose demand for their services, because their successful strategies rest not only on rational algorithms but also on intuition, on the ability to subtly sense what is happening with a client on an emotional and physical level, and on a systemic understanding of the surrounding context. A robot will never be able to do this.
Therefore, when I train or consult young people, I strongly recommend that they invest their efforts in their own development today. To be in demand in the future, people need not only deep professional knowledge, but also to actively develop their emotional intelligence.
Speaking about the increasing penetration of AI into our lives, I think job loss is perhaps the most harmless consequence; I am sure politicians and economists will be able to find a solution. AI evokes in me not only anticipation of future technological wonders but also anxiety, because the potential impact of artificial intelligence far exceeds that of atomic energy. If AI falls into the hands of destructive leaders, we may get things far more terrible than job cuts.
I think the developers in the field of atomic energy, on learning the consequences of the use of atomic weapons in Hiroshima and Nagasaki, were emotionally and spiritually unprepared for such results of their scientific and technological research: hundreds of mutilated bodies, thousands of agonizing deaths from radiation sickness. I am sure that awareness of those consequences made the scientists' lives unbearable. I want to draw attention to this history: we must draw serious conclusions from it, work on our mistakes, and prevent a repetition of what has already happened once.
CNews: Let's try to explain what emotional intelligence is.
Olga Lukina: There is a widespread view that emotional intelligence is a person's ability to feel and understand their emotions and manage them, as well as their ability to feel and understand the emotions of other people and manage them.
Unfortunately, this popular definition is rather vague and cannot be considered complete; it can mislead people. Remember Ostap Bender, the charming con man of Ilf and Petrov's novels: he knows perfectly well how to manage the emotions of the people around him, subordinating them to his schemes and momentary commercial interests. By the existing definition, we could say that Ostap has high emotional intelligence. He is charming and reads other people's motives perfectly, but to him they are only objects of manipulation. And any manipulation is a power-based psychological game.
A person with a mature, whole psyche will quickly see through both the manipulator and the charm that covers his selfish intent, and will not allow himself to be used. The definition of emotional intelligence must necessarily include a criterion of a high level of awareness: a person's understanding of their genuine feelings, not the substitutes imposed on them by someone in their distant childhood. It must also take into account a person's mature value system, which determines the choices for which they are ready to take responsibility.

IT Specialists Are on the Cutting Edge of Social Responsibility for the Future

CNews: You work a lot with talented IT specialists. How are they doing with emotional intelligence? How can this affect the development of AI?
Olga Lukina: IT leaders are very extraordinary people. From early childhood they are driven by a powerful instinct to explore the world and to find answers to how things work and why. Such children very much need the protection and love of their parents for their brains to reveal their full potential. When they encountered unreasonable or inconsistent parental behavior, these remarkable children were forced to spend their strength not on development but on finding ways to adapt.
No one has repealed the role of a person's unconscious life script. This matrix, deeply embedded in the psyche, determines a person's choices: in adult life they unconsciously reproduce the very scenes and plots from which they suffered so much as children, both in their attitude toward themselves and in their relationships with the people around them. The script steals inner freedom, spontaneity, and a sense of emotional security. Intellectual and interesting work can become a lifeline for these people for many years, a place where they escape.
But this is an infantile form of psychological protection from emotional distress; it is imperfect and very costly.
In the very process of psychotherapy I have always admired IT professionals' capacity for work and their sincerity, and the insight of a mind that grasps the essence of something new and then works through it on its own. All such a person needs in order to move forward is an opponent who matches them in systematic and rapid thinking; they simply will not work with any other psychotherapist. Their psychotherapist must do what they cannot do themselves: feel the child trapped inside them and call that child back to life.
I believe that IT professionals can do something that will qualitatively change our world and turn it into a better place. They will discover possibilities most of us do not even suspect. And with all this, I worry that the ideas and brilliant developments of these people will fall into the hands of destructive leaders from the worlds of business and politics, and that it is those leaders who will become the beneficiaries.
In my book there is an episode in which an IT genius who brings the corporation billions cried silently, like a child, in my office, telling me about a humiliating situation in which he could not defend himself. His boss, in a bad mood that morning, was so dissatisfied with the pace of the IT department's most complex developments that verbal insults to his IT director were not enough for him. He wanted to crush the man's human dignity by showing his unlimited power: he demonstratively wiped his nose on the papers the IT director had brought him, papers prepared over several sleepless nights.
That was a mild version of the outward manifestation of destructive leadership, which is the biggest threat to all of us. All of this has happened before. We have all read about the banquet in honor of the first test of the hydrogen bomb, when a general authoritatively put down Andrei Sakharov with a bawdy anecdote after Sakharov proposed a toast "so that nuclear weapons would never be used against people!"
IT professionals working on AI do not have the right to let anyone treat them this way. They do not have the right to treat their developments as fascinating children's toys. And they do not have the right to use work as a way of escaping from unresolved emotional problems, because the potential of what they are doing is too great. A hunt is on for the brains of those who make breakthrough innovations. The key question becomes IT leaders' responsibility for what they do and for how it will be used. This is a question of a person's integrity, of their professional and human ethics. And right now it hangs in the balance.
CNews: You cited atomic energy and the negative consequences associated with its use as an example. What threats does artificial intelligence itself carry? Do you have any examples?
Olga Lukina: One does not have to look far for an answer. Cautionary dystopian and science-fiction novels are already coming to life today. I get the feeling that Google is noticeably transforming from a democratic, developing platform that opened up our ability to choose the most diverse and unbiased information into an environment in which the influence of destructive leadership is increasingly noticeable.
For example, they believe they can save money on competent, intellectual content managers and strive to exclude them from the process. Decision-making is handed over to artificial intelligence, which is configured in such a way that non-standard, original content capable of developing people, and leaders themselves, often fails to pass through its sieve.
The coronavirus pandemic became a provocateur and accelerator of this process: people went into quarantine, and the right to decide on the acceptability of content on YouTube de facto passed to artificial intelligence.
What this turned out to mean in practice: at the height of the pandemic, we posted episodes of my documentary series "Leaders in My Office" on YouTube. The film is precisely about leaders who came to analytical therapy to work on themselves; about how they walked their difficult path, regaining the ability to love themselves and other people; how they painfully worked through their ambitions; how they learned to trust again and cleared the way to self-realization so they could pursue their real goals; how they turned from tyrants and rebels into free and creative leaders. The material is not banal.
YouTube's artificial intelligence did not let this content through! We filed six appeals, but nothing changed. The rare human managers hardly ever got in touch, and when they did they wrote astonishing messages like: "We are very sorry; we ourselves see no problem with the content, but it does not pass the robot, and we can do nothing. Please write another appeal. We really hope the system will let the film through." My amazement grew: what is the purpose of another appeal? In the hope of winning mercy from the AI? As a psychiatrist, I realized this was as pointless as looking for prudence in a psychotic.
A mini-dystopia, today? So robots now decide what content is acceptable on YouTube, and a human can no longer intervene? Then this is an SOS signal, and we are going to receive only mediocre, averaged, directed content. Probably this suits the people making the decisions. But does it suit the IT leaders who built the platform while thinking about the free development of people?
CNews: Do you understand what the artificial intelligence did not like about this material?
Olga Lukina: I believe that the artificial intelligence catches "unacceptable" words. For example, in telling about my path I use the phrase "psychotropic drugs." Or, talking about a client of mine, a businessman from the 1990s, I mention the features of that time: how was business built here back then? This is documentary material. The artificial intelligence catches a word, say "alcohol," and does not let the content through. It recognizes words but does not understand meaning; in English the word "sense" carries both meanings, "sensation" as well as "emotional and intellectual meaning."
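The behavior she describes corresponds to the simplest kind of moderation logic: a bare keyword blocklist. The sketch below is purely illustrative; the blocklist, function name, and example sentences are assumptions, not YouTube's actual system. It shows how such a filter blocks a clinical, documentary sentence just as readily as a genuine violation, because it matches words without understanding context:

```python
# Illustrative sketch of a naive keyword-based content filter.
# BLOCKLIST and all example texts are hypothetical.

BLOCKLIST = {"alcohol", "psychotropic"}

def keyword_filter(text: str) -> bool:
    """Return True if the text is flagged (blocked) by the blocklist."""
    # Split on whitespace and strip surrounding punctuation before matching.
    words = {w.strip(".,!?\"'()").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A documentary sentence about therapy is blocked just like an ad for spirits:
print(keyword_filter("She was prescribed psychotropic drugs during treatment."))  # True
print(keyword_filter("Buy cheap alcohol here!"))                                  # True
print(keyword_filter("Leaders learned to trust again."))                          # False
```

The filter has no notion of who is speaking or why the word appears, which is exactly the "recognizes words, does not understand meaning" failure described above.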
Everything would be much simpler if these settings were managed constructively and creatively by leaders who bear responsibility, including for the imperfections of artificial intelligence. Leaders' effectiveness ends where they begin to economize on human reason and artificial intelligence begins to make decisions on its own, leaving the concept of "meaning" out of the equation.

Many Fear Automation

CNews: Why do you think employees in many companies greet the appearance of AI with some apprehension? Is this a manifestation of irrational fear of objects acting like humans? Or is it a matter of banal fear of losing their job?
Olga Lukina: Both fears are present. But working with irrational fears is not a manager's competence; it is a psychotherapist's, provided, of course, that a person comes with such a request. And a mature state that thinks about its people should be ready to offer an alternative to workers who are being let go.
CNews: Should any steps be taken aimed at "humanizing" artificial intelligence?
Olga Lukina: We still do not fully understand what human intelligence is and how the human brain functions. Will we be able to create artificial intelligence similar in its functions to the brain? The consciousness of the smartest people on the planet continues to be destroyed by diseases: Alzheimer's, Pick's disease, malignant forms of schizophrenia. And we are powerless. Everyone can only pray that this fate does not overtake them.
Many intellectually strong people behave neurotically, damaging themselves and other people. Love and spontaneity are not subject to digitization. What then should be understood by "humanizing" AI? Perhaps an attempt to algorithmize neurotic sequences? I do not know how to answer your question.
CNews: It seems to me that no such goal is currently being set; it is considered unattainable for now.
Olga Lukina: If so, then artificial intelligence should remain an applied tool. And at every moment when a decision is required, a responsible, professional person should stand next to the machine, capable of making those conscious decisions.
Dr. Olga Lukina

Business Psychotherapist

Originally published in CNews
