AI Awakening, Human Decline?

Sam 2022-03-27 09:01:14

First of all, I want to make clear that I always stand on the side of humanity, and I hope that malfunctioning robots get destroyed as soon as possible.

The early storyline around Mia had a creepy flavor, and I kept shouting inside for this robot to be recycled as soon as possible. But the story didn't develop that way.

1. The most uncomfortable part of the story is how certain characters are written:

1. It was normal for Laura to be suspicious of the robots at the beginning, but later on, even when she clearly knew there was a murderer among them (a robot that broke the Three Laws and could kill, how horrifying!), she still wanted to help this group of robots like a bleeding-heart saint. Then the couple learned that these robots can awaken consciousness in other robots, and they kept on helping them. What the fxxk!

2. Mattie plays an important role in the whole story. Her most contradictory point is that at the beginning she tries to hack the school's robot and fails, which shows she is no super hacker. Yet next she boldly uploads the code (only Leo notices it; who knows how the other hackers would react) and threads the needle between Leo and Mia, all just to bring Mia back as a conscious robot. Hey! I can't say the actions she takes in each new situation are all that wrong, but her behavior still makes me somewhat uncomfortable; her function as a plot device is too obvious.

3. The old professor expressed his love for robots from the start, but just before he dies he suddenly says, "The future belongs to these robots." Hey, did he completely forget his own identity as a human being? If he were half machine and half human like Leo and didn't care about the fate of humanity, that would be one thing, but this professor is a pure human being. Why does he hold such a view?

2. About the other characters

1. At first the little girl Sophie only wanted the robot to keep her company, which reminded me of the little girl at the beginning of Asimov's robot series. Although her actions create plot conflict, I think they are completely logical under the circumstances.

2. The robot Odi is rather cute; I like him the most. He doesn't appear in many scenes, but I checked the cast list and he will appear in the next two seasons. (By the way, I looked at the actor's other stills, and he is most distinctive as Odi.)

3. I agree with Karen's point of view: destroy the other robots made by David as soon as possible, and then destroy herself.

3. About the social setting

1. Such a large number of very human-like robots appearing so quickly reminds me of Asimov's robot series and the setting of The Caves of Steel. In the robot series, it takes a very long time to go from the earliest robots to later models and eventually to the emergence of a robot president, and throughout that process anti-robot sentiment among people never disappears. In The Caves of Steel, the people of Earth are all anti-robot, while the Spacers use humanoid robots, and only two robots ever truly break through the Three Laws and arrive at the Zeroth Law. (Of course, I don't deny that modifying robots may cause them not to follow the Three Laws, but would the government allow private modification?)

2. Imagine: if the government does not allow private modification, what measures could it take? One solution I can think of is that every robot carries a unified factory serial number, and its core chip carries the Three Laws settings (as fundamental to the robot as 0 and 1 are to a computer). Of course, there might be modifications in the laboratory to add more features, but these would be done under strict supervision and would not go to market until they pass inspection. After a robot is adopted by a user, whenever it enters a public area it is scanned by monitors (chip number, factory number, and so on), and an alarm is raised promptly if any abnormality is found. This could effectively put an end to privately built and privately modified robots. (A toy sketch of this check follows this list.)

3. Before robots replace human jobs, automated machinery will have replaced human labor even earlier. If machines take over labor, what is left for people to do? Philosophy, art, and exploring the universe. Besides, if robots take over some jobs, social welfare should improve accordingly, and the problem of surplus value would be largely solved. Also, wouldn't it be better for people to do less repetitive work and more creative work? For example, robots may be able to perform routine operations, but that doesn't make studying medicine useless: the human body has not been fully explored, and the mysteries of the brain are still unknown and waiting to be studied. So the eldest daughter's worry in the show that robots have taken her job is really unnecessary.

4. Would the government use robots to monitor everyone's lives, not only their online lives but also, by retrieving a robot's memory, their real lives?

5. Two scenes here also appear in Black Mirror. The memory-playback projection is similar to the device in the Black Mirror Season 1 episode "The Entire History of You", and a consciousness imprisoned in a machine, made docile and obedient, is much like the Black Mirror episode (I forget which one) in which a copy of a woman's consciousness is imprisoned in a domestic robot.

6. The idea that a robot here only needs to change its pupils for humans to be unable to recognize it is too childish. Robots should have more prominent identifying features.
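
To make the registration-and-inspection idea from point 2 above a little more concrete, here is a minimal toy sketch in Python. Everything in it (the REGISTRY table, RobotID, scan_robot, the factory numbers) is my own hypothetical illustration, not anything from the show: a public-area monitor compares a robot's reported factory number and firmware fingerprint against a government registry of certified Three-Laws builds and raises an alarm on any mismatch.

```python
# Toy sketch of the registry-and-scan scheme described in point 2 above.
# All names and values here are hypothetical illustrations.

from dataclasses import dataclass

# Government registry: factory number -> fingerprint of the certified Three-Laws firmware.
REGISTRY = {
    "FAC-001": "a3f5-certified-hash",
    "FAC-002": "b7c2-certified-hash",
}

@dataclass
class RobotID:
    chip_number: str
    factory_number: str
    firmware_hash: str  # fingerprint of the firmware actually running on the robot

def raise_alarm(robot: RobotID, reason: str) -> None:
    # Stand-in for notifying the authorities.
    print(f"ALARM: chip {robot.chip_number} / {robot.factory_number}: {reason}")

def scan_robot(robot: RobotID) -> bool:
    """Public-area check: True if the robot passes, otherwise raise an alarm."""
    certified = REGISTRY.get(robot.factory_number)
    if certified is None:
        raise_alarm(robot, "unregistered factory number (privately built?)")
        return False
    if robot.firmware_hash != certified:
        raise_alarm(robot, "firmware does not match certified Three-Laws build (modified?)")
        return False
    return True

# Example: a privately modified robot trips the firmware comparison.
scan_robot(RobotID("CHIP-42", "FAC-001", "deadbeef-tampered"))
```

A privately built robot would fail the first lookup, and a privately modified one would fail the firmware comparison; that is the whole point of the scheme.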

4. To what extent can we accept the existence of artificial intelligence?

1. It must not be self-righteous. It may make suggestions, but if the owner disagrees and insists on their own view, the owner's opinion should prevail, not the robot's. For example, the old professor's nursing robot VERA in the show doesn't listen to her owner at all, which is unacceptable.

2. Harming human beings in any form is absolutely not allowed. For example, Mia scalds an adult in order to protect a child, and Niska kills people. The former needs her program adjusted; the latter's behavior completely crosses the bottom line and must be dealt with seriously.

3. It can be more autonomous and understand human emotions more deeply. For example, Mia's long fake smile at the beginning, Odi's inability to respond to the old professor's affection, and the passer-by robot that, as its owner says, doesn't really understand drama: all of these could be adjusted to better fit human emotional needs, but points 1 and 2 above must be followed first.
