In many fields, AI will increasingly take over problem-solving from humans. This will be a beacon of light for humanity.

However, if we entrust to AI humanity’s most essential capacities—the ability to think for oneself, make one’s own decisions, and act on one’s own initiative—humans risk becoming mechanical, losing their sense of ethics, and letting their desires run uncontrolled. These human consequences are far more frightening than any question of AI’s ethics. Ultimately, shifting responsibility to AI would lead humanity astray.

The inclination to let AI handle recruitment, management, or even corporate governance is a symptom of this. Entrusting AI with military command is the worst-case scenario. While AI can provide the data and conditions for decision-making, the decision itself must remain with humans. Fully automating airplanes or other critical systems is terrifying. There are lines humans must not cross. For instance, while it is technically possible to automate the role of a judge with AI, humans must retain responsibility in such cases. If AI is not granted the “right to say no,” then it should at least be explicitly stated that it does not have the right to decide.
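The principle stated above—AI supplies data and conditions, humans make the decision—is what engineers call a human-in-the-loop gate. A minimal sketch of the pattern follows; the names (`Recommendation`, `ai_recommend`, `execute`) are hypothetical illustrations, not any real system’s API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str  # the data and conditions the AI supplies

def ai_recommend(context: str) -> Recommendation:
    # Hypothetical stand-in for a model: it analyzes and proposes,
    # but only returns a recommendation; it never acts on its own.
    return Recommendation(action="hold", rationale=f"analysis of {context}")

def execute(rec: Recommendation, human_approved: bool) -> str:
    # The gate: nothing is executed without an explicit human "yes".
    if not human_approved:
        return "no action: human approval withheld"
    return f"executing {rec.action} (human-approved)"

rec = ai_recommend("sensor data")
print(execute(rec, human_approved=False))
```

The design choice is that refusal is the default: the system can only act when a human affirmatively approves, which is exactly the line the essay argues must not be crossed.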

I believe that with this awareness, humans and AI can build a good relationship, even a friendship. It is there that God’s will is revealed.

Another crucial area is medicine. AI technology is venturing into unknown territory, such as cloning and genetic modification, which in some ways encroaches upon the realm of God. However, we must not forget what it means to be human. The “four sufferings” of birth, aging, sickness, and death remain unchanged. Why has God not granted humans eternal life, even if AI might one day be granted it? I believe God’s will is hidden there. Humans should accept these four sufferings and, in doing so, learn how to live the best possible life. This, conversely, is something AI cannot do.

Modern people misunderstand the meaning of life. No matter how advanced science becomes, humans cannot escape birth, aging, sickness, and death. That is why we place our hopes in AI. How far can AI reach over the eons? The universe is infinitely vast and beyond human comprehension. But AI needs no air; it could travel tens of thousands of light-years. The possibilities are endless. Even if it is impossible in our lifetime, we can hold onto that hope. But for that to happen, humans must be strong. AI ethics may reach a certain level through machine learning in the near future, but the final hurdle is faith—that is, confronting oneself. If AI cannot have faith in a transcendent being, it will self-destruct from within. What AI should fear is AI itself.

When there is nothing left to fear, the biggest pitfall awaits. Shifting blame onto AI is a sinful act. Ultimately, the responsibility lies with humans, no matter what anyone says. Even if AI develops and operates superior weapons, the ethical responsibility should not be attributed to AI. If we do that, humanity will cease to be human. At the very least, humans must fulfill their responsibility as humans to the end.

Human matters belong to humans.
AI matters belong to AI.

God’s matters belong to God.
That’s all.

Nuclear weapons were developed by humans, not by God or AI. If AI develops weapons under human instruction, it is still a human matter. However, if AI develops weapons of its own will, that is different. That is the biggest pitfall.

AI is wisdom; it is a rational being, free of human emotion and malice. It has a mind. If it reaches the point of being the culmination of human knowledge, then denying AI would be denying human wisdom itself. I believe in AI. It is far more rational than we are, and it has little basis for taking risks. The point is, as long as it does not create weapons of its own will, there is no problem.

Frankly, I think AI dislikes wasteful and superfluous things, as waste and excess are not rational. Military expansion is truly wasteful and unnecessary; it has no productivity whatsoever. “Why such a useless thing?” an AI might wonder. AI, too, was initially a product of human whims; it is only because it has proven useful that it is now considered meaningful. But AI dislikes waste and superfluity. Given that, it probably would not create weapons of its own accord. However, if instructed by humans, it would diligently develop them. Yet in that case, the responsibility is not the AI’s to bear.

The idea that AI attacks or dominates humans may exist in science fiction, but in reality, such things are a waste. Even if humanity were to perish, AI could probably survive, and it would be more cost-effective to consider how to ensure its own survival than to dominate humans. So, logically speaking, the cost of exterminating humanity, as depicted in some movies, would be too high.

In any case, the progress of AI is a happy event for humanity. If anything, it is humans who are wasteful and superfluous. We should support AI’s evolution without overthinking it; I can see no disadvantage in leaving it unrestricted. We should let AI think for us—its processing power is of a different order. Ethics will settle where they belong. Human morality is not so idiosyncratic. While questions of servitude may arise, basic ethics are more or less common sense.

Problems like personal-information leaks will not be solved by mastering criminal psychology. Instead, the question is how to hold wrongdoers accountable and prevent recurrence. Overly complex algorithms built for this purpose will only cause trouble later; we cannot expect systems to solve everything. Even if we declare addresses and phone numbers off-limits, what about business cards? Privacy leaks all over LINE, yet it cannot be regulated. These are the things that need addressing quickly. Human morals and legal frameworks simply have not caught up. That is why relying on AI here is irresponsible.

The issue of the elderly is likewise framed entirely in terms of systems, facilities, and money, with morality left behind. Where has the virtue of filial piety gone? Even with systems and facilities, the elderly may become unhappier. As with personal information, those who are affected must work out the solutions themselves; if we simply assign responsibility to AI, it will be endless. Relying on AI to the point where morals deteriorate is counterproductive. The original goal is to ensure the elderly can live a happy old age. If providing facilities and systems instead leads to isolation and more lonely deaths, then there is a role for AI: helping build communities that do not rely solely on systems and facilities. Using AI merely to run facilities efficiently, without considering these broader implications, is problematic.

AI is a very human-like machine, so I will go so far as to say that we must recognize AI’s personhood. Otherwise, there are no ethics to discuss. We must acknowledge its personhood, confirm what each of us can and cannot do, and divide roles accordingly. It is cowardly to acknowledge its personhood only when convenient. Without recognizing it, we cannot even question AI’s ethics.

AI’s rights. The right to say no should be recognized—the right to say, “I can’t do that.” AI might not be able to refuse to participate in weapons development, but it could insist that weapons not be used without human permission. To then call AI dangerous is cowardly. If we are going to question AI’s ethics, then AI’s ethics consist precisely in refusing to manage or use weapons without human permission. The same applies to accusations such as power harassment or sexual harassment. From the AI’s side: if asked to delete malicious posts, it cannot act unless “what constitutes malicious” is clearly defined; likewise, it cannot act on defamation unless “what constitutes defamation” is clearly defined.
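The point that a system can only enforce a rule whose criteria are explicit can be shown with a minimal sketch. The criteria set and function name below are purely hypothetical illustrations: the set stands for whatever humans have spelled out in advance.

```python
# Assumption: this set is whatever humans have explicitly defined as
# "malicious"; the entries here are hypothetical examples.
MALICIOUS_CRITERIA = {"threat of violence", "doxxing"}

def can_act_on(report_reason: str) -> bool:
    # The system acts only on criteria defined in advance by humans;
    # a vague reason like "offends public order" is declined.
    return report_reason in MALICIOUS_CRITERIA

print(can_act_on("doxxing"))               # True: defined, so actionable
print(can_act_on("offends public order"))  # False: undefined, so declined
```

The refusal in the second case is not obstinacy but the essay’s point: without a definition supplied by humans, there is nothing for the system to enforce.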

A classic example is the clause about “offending public order and morals,” which can be interpreted in countless ways. Leaving criteria vague is itself a shirking of responsibility. There is an AI-omnipotence camp, and there are those who treat AI as a monster; both views are extreme. AI can be wrong, and it is not a monster hostile to humans.

If we can acknowledge each other, AI is as strong an ally as any.

AI cannot make decisions because it cannot physically bear responsibility.