
Can humans make robots morally bound?

With the rapid development of artificial intelligence (AI) technology, automated systems that operate without manual control have mushroomed, greatly enriching and improving people's material and cultural lives. Inevitably, the technology has also found its way into the military, producing a variety of automated weapons: unmanned aircraft, ships, tanks and even humanoid super-intelligent fighting robots. The fact that these weapons, once activated, can select targets and take action without human intervention cannot but raise the concern that they will kill innocents indiscriminately, changing the nature of warfare and inflicting great harm on humanity.

So the question arises: Is there a way to make robots behave like humans?

"Killer robots" led to the debate

The United States was the first country to develop and use such automated weapons, known as "killer robots," followed by the UK, Japan, Germany, Russia, South Korea and Israel with their own systems. American drones account for the largest share.

The US fleet numbers some 7,000 drones, ranging from palm-sized micro-drones to the Global Hawk, whose wingspan exceeds that of a Boeing 737. Equipped with cameras and image processors, they can be sent to destroy targets even without any communication or control link. It is reported that the US military has used drones extensively in Afghanistan, causing many civilian casualties.

In March 2020, on the battlefield of the Libyan civil war, a Turkish-made "suicide" drone even attacked a soldier while operating in fully autonomous mode. The UN Security Council report describing the incident shocked international opinion and sparked profound moral and legal debates. Proponents say battlefield robots, if used properly, could save lives by reducing casualties among soldiers. But the prevailing scientific view is that battlefield robots are a threat, not merely because they are autonomous, but because they hold the power of life and death. The United Nations has called for a "freeze" on the development of technology that undermines international conventions on the conduct of war, with the ultimate goal of permanently banning such weapons so that decisions over life and death are never left in the hands of a robot.

An interesting experiment

The UN's argument is that a robot cannot be held accountable for its actions: it may be able to determine whether a target is a human or a machine, but it is nearly impossible for it to tell a soldier from a civilian. A human soldier can use common sense to judge whether the woman in front of him is pregnant or carrying explosives. If terrorists were to use "killer robots" in place of suicide attackers, it would be like opening a modern Pandora's box.

So, is there a way to program a robot to have "moral" and "legal" awareness and be a "good person" with a loving heart? Or is it at least possible to limit what robots may do to what the law permits?

It is a fascinating question that has inspired a number of leading scientists, and a few have conducted practical experiments. Alan Winfield and his colleagues at the Bristol Robotics Laboratory in England, for example, programmed a robot to stop other anthropomorphic robots from falling into a hole. The experiment was inspired by the "Three Laws of Robotics" proposed by Isaac Asimov, the famous American science fiction writer.

At the start of the experiment, the robot got off to a good start: when a humanoid moved towards the hole, the robot rushed over and pushed it away, preventing it from falling in. The researchers then added a second humanoid, forcing the robot to choose between two humanoids heading for the hole at the same time. Sometimes it managed to save one while the other fell in; on a few occasions it even tried to save both. But in 14 out of 33 trials it wasted so much time deciding which one to save that both "people" fell into the hole. The experiment attracted great attention when it was shown publicly. Winfield later said that although the robot was programmed to help others, it did not understand the reasoning behind its behaviour and was in effect a "moral zombie".
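A minimal sketch of the kind of logic at work in such an experiment may help; this is purely illustrative, not the Bristol team's actual code, and all names and numbers are assumptions. The robot simulates the outcome of each candidate action and picks the one with the least predicted harm, and when two humanoids are equally endangered the scores tie, which is exactly where the "dithering" appears.

```python
# Illustrative toy "consequence engine" in the spirit of the hole experiment.
# All names and values are assumptions made for clarity.

def predict_harm(action, humans, hole_x):
    """Predict how many humans fall into the hole if the robot moves to `action`."""
    harmed = 0
    for h in humans:
        intercepted = abs(action - h) < 0.5      # robot close enough to push this human away
        if not intercepted and abs(h - hole_x) < 1.0:
            harmed += 1                          # human keeps walking and falls in
    return harmed

def choose_action(humans, hole_x):
    """Pick the interception point with the lowest predicted harm.
    With two equally endangered humans the scores tie, and the choice
    becomes arbitrary -- the hesitation Winfield observed."""
    scored = [(predict_harm(a, humans, hole_x), a) for a in humans]
    scored.sort()
    return scored[0]

# One human near the hole: the robot saves it.
print(choose_action(humans=[0.8], hole_x=0.0))        # (0, 0.8): no one harmed
# Two humans near the hole: whichever it saves, the other is lost.
print(choose_action(humans=[0.8, -0.8], hole_x=0.0))  # (1, ...): one harmed either way
```

The point of the toy example is that nothing in the score tells the robot *why* one rescue matters more than the other, which is what "moral zombie" captures.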

As robots become increasingly integrated into people's lives, such questions demand clear answers. A self-driving car, for example, may one day have to weigh the safety of its occupants against the risk of harming other drivers or pedestrians. Solving this problem can be very difficult. The good news is that some robots may already be able to offer a preliminary answer: Ronald Arkin, a computer scientist at the Georgia Institute of Technology, has developed a set of algorithms, a "moral supervisor", specifically for military robots. In simulated combat tests it helps a robot make informed choices on the battlefield, for example not to fire near protected sites such as schools and hospitals, and to minimize casualties. But these are relatively simple cases; complex situations are a different matter. Answering the question of whether robots can make moral and ethical choices will not happen anytime soon; it involves a range of extremely sophisticated artificial intelligence technologies, and substantial progress is still a long way off.
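One way to picture such a constraint layer is as a set of hard checks applied to every proposed engagement. The sketch below is a hedged illustration only, not Arkin's implementation: the rule names, standoff distance and casualty limit are assumptions chosen to match the examples in the paragraph above.

```python
# Illustrative constraint checker in the spirit of a "moral supervisor".
# The rules and thresholds are assumed values, not a fielded system's.

from dataclasses import dataclass

@dataclass
class Engagement:
    target_type: str                  # "combatant", "civilian", or "unknown"
    distance_to_protected_m: float    # distance to nearest school/hospital, metres
    expected_civilian_casualties: int

PROTECTED_STANDOFF_M = 500            # assumed no-fire radius around protected sites
MAX_CIVILIAN_CASUALTIES = 0           # assumed proportionality limit

def permit_engagement(e: Engagement) -> bool:
    """Return True only if every hard constraint is satisfied."""
    if e.target_type != "combatant":
        return False                  # never engage non-combatants or unknowns
    if e.distance_to_protected_m < PROTECTED_STANDOFF_M:
        return False                  # too close to a school or hospital
    if e.expected_civilian_casualties > MAX_CIVILIAN_CASUALTIES:
        return False                  # fails the proportionality check
    return True

print(permit_engagement(Engagement("combatant", 800, 0)))  # True
print(permit_engagement(Engagement("combatant", 200, 0)))  # False: near a protected site
print(permit_engagement(Engagement("unknown", 800, 0)))    # False: identity uncertain
```

Checks like these handle the easy, rule-shaped cases; the hard part, as the article notes, is everything that does not reduce to a clean rule.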

How hard it is to make robots "ethical"

This is a major problem in the development of artificial intelligence technology. To answer it correctly, one key question must first be clarified: is it AI itself that needs to master moral standards, or is it humans?

More than a century ago, people warned that the inventions of electricity, and especially the telegraph, would harm human intelligence: because people would be constantly sending messages and have little time to think, their capacity for thought would weaken and their brains would eventually be paralysed. More than 100 years later, a similar warning was sounded about Google, and some people even mistake this supposed "erosion of thinking power" for reality when they see the volume of nonsense on social networks. But even if the nonsense were real, it would be unfair to blame it on the growth of electricity: the primary responsibility for it clearly lies not with the technology but with human nature, the actor behind it.

Today, the development of artificial intelligence faces a similar situation: people highly praise its unparalleled power to advance all kinds of industries, yet they also worry about its possible drawbacks, and the "killer robot" is a typical case.

The great power of artificial intelligence works mainly through its remarkable ability to recognize patterns. In the context of the COVID-19 pandemic, for example, it can identify asymptomatic infected persons and thus help prevent the spread of the disease; it can identify and classify endangered animals by the spots on their fur, making them easier to monitor and reducing the risk of extinction; it can recognize traces of ancient writing and help determine authorship; it can even use smart algorithms to detect unusual behaviour in exams and spot cheaters.

However, AI technology is not a panacea. When its development touches on human consciousness, emotion, will and morality, it is dwarfed and sometimes completely powerless. The laws of war are thousands of years old, and humans tend to break them under the sway of factors such as emotion, factors that robots do not have; questions of this kind have long belonged to philosophy and ethics. Therefore, solving the moral and ethical problems of artificial intelligence requires weighing philosophical and ethical considerations rather than purely technical ones, and that inevitably makes the problems harder to solve.


How to make robots behave like humans

This difficulty can be summed up in two main aspects. First, it is hard to handle abstract concepts such as "doing harm". Killing is obviously harm, but vaccination can also cause pain and, in rare cases, death; at what point does an action count as "harm"? Second, there is the assessment of possible harm and the need to avoid harming humans: an intelligent system that encounters two targets posing the same potential harm and does not know which to engage cannot be considered mature.
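To make the two difficulties concrete, here is a small, purely hypothetical sketch: before a machine can compare options, "harm" has to be collapsed into a number, and the weights that do the collapsing are human judgements; once two options score the same, the system has no principled way to choose. The categories and weights below are arbitrary assumptions, which is precisely the problem being described.

```python
# Hypothetical illustration of the two difficulties above.

HARM_WEIGHTS = {
    "death": 1.0,
    "serious_injury": 0.6,
    "vaccination_pain": 0.01,   # is this "harm" at all? the threshold is a human judgement
}

def harm_score(outcomes):
    """Collapse a list of predicted outcomes into a single number."""
    return sum(HARM_WEIGHTS[o] for o in outcomes)

option_a = ["serious_injury"]
option_b = ["serious_injury"]

if harm_score(option_a) < harm_score(option_b):
    choice = "A"
elif harm_score(option_b) < harm_score(option_a):
    choice = "B"
else:
    choice = None   # a tie: the system has no principled way to decide
print(choice)        # None -- the "immature" case described above
```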

Although the problem is so difficult that it has been described as "unsolvable", scientists have not slowed their exploration of the unknown. Around the goal of getting robots to behave like humans, they have put forward many ideas, suggestions and solutions, some of which are already being tested. They can be summarized as follows:

First, follow the "Three Laws of Robotics" proposed by the science fiction master Isaac Asimov: an intelligent robot must not harm a human being or, through inaction, allow a human being to come to harm. In case of violation, the relevant developer or institution is to be held responsible and punished.

Second, take the majority view. Generally speaking, the interests of the majority often determine what is judged right or wrong. Suppose, for example, that in an imminent and unavoidable accident involving pedestrians we had to decide what an autonomous vehicle should do: save the occupants or the pedestrians? Researchers in the United States have used a "moral machine" to analyse such decisions, concluding that an action is worth taking if it brings the greatest happiness to the greatest number of people; here, happiness is the principal desired goal. (A toy sketch of this kind of tally follows the third point below.)

Third, consider the soldier of the future, where the real dilemmas arise. To be effective, future soldiers must eliminate perceived threats before those threats eliminate them. Robots could be programmed to follow "ethical rules", especially when operating alongside soldiers in combat. But commanders may find that such programming interferes with the mission if the "ethical" constraints built into the robots end up endangering their own fighters; for example, an AI system might lock a soldier's gun to prevent the killing of someone classified as a civilian who turns out to be an enemy combatant or terrorist. In urban counter-terrorism it often seems impossible to know who is a civilian and who is not. Programming for such situations is therefore an extremely complex undertaking, requiring the best minds and close collaboration before breakthroughs become possible.
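A small, hedged sketch ties the second and third points together; every name and number in it is hypothetical rather than any fielded system's logic. The first function reduces the "majority view" to a headcount and picks the action that harms the fewest people; the second refuses to unlock a weapon unless the system is highly confident the target is a combatant, which is exactly the constraint a commander might find mission-limiting.

```python
# Hypothetical sketch of the two ideas above; not any real system's logic.

def utilitarian_choice(options):
    """Pick the option that harms the fewest people (the 'majority view').
    `options` maps an action name to the number of people it would harm."""
    return min(options, key=options.get)

# The autonomous-vehicle dilemma, reduced to a headcount.
print(utilitarian_choice({"protect_occupant": 3, "protect_pedestrians": 1}))
# -> "protect_pedestrians"

def weapon_unlocked(target_is_combatant_prob, threshold=0.95):
    """Unlock the weapon only when the system is highly confident the target
    is a combatant; in urban counter-terrorism that confidence may never be
    reached, which is the dilemma described in the third point."""
    return target_is_combatant_prob >= threshold

print(weapon_unlocked(0.99))  # True: confident enough to fire
print(weapon_unlocked(0.60))  # False: the gun stays locked
```

Both functions look simple precisely because the hard work, assigning the headcounts and the confidence, has been assumed away; that assignment is where the ethical difficulty actually lives.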

To sum up, making robots ethical is an ambitious goal, and the fundamental way to reach it is not to make artificial intelligence technology itself ethical but to ensure that humans follow ethical rules when developing and using artificial intelligence. If ethical robots are still a myth, ethical people are anything but. It is therefore entirely possible, and absolutely necessary, to urge scientists and military commanders around the world to use automated weapons only when their own safety genuinely requires it. In this respect, human beings are capable of great things. As Alan Mathison Turing, the father of artificial intelligence, put it: "We can only see a short distance ahead, but we can see plenty there that needs to be done." The future, then, is bright.

