I take the point, but who doesn't look at their phone while sitting on the toilet?
Closer to home: I used to be a vocal advocate of "technological neutrality," but over the years I have grown more and more conservative about technology, especially big tech. Since this touches on my own professional field, I might as well write a bit more to record how my thinking has changed, along with some recent reflections. I mainly want to make two points: the non-objectivity and the uncontrollability of technology.
Non-objectivity
Anyone with a little knowledge of mathematics or computer science knows that an algorithm is, at its core, a pile of mathematics and operations: well defined, with no deception or ambiguity. Given the same inputs, it produces the same outputs (or at least the same distribution of outputs). Algorithms are "dumb" and "honest" in this way. Whether you are a capitalist or a proletarian, the algorithm takes no bribes and plays no favorites; it just does what it is supposed to do. Doesn't that count as objective?
There is a subfield of machine learning called machine learning fairness, which studies how biases in algorithms affect social fairness. Research in this field often uses the COMPAS dataset, which records defendants' personal characteristics and sentencing records. Researchers trained a machine learning model on this dataset to predict the probability that someone will commit a crime in the future, and found that the trained model was more likely to incorrectly flag Black defendants as prone to crime. Mathematically, this bias comes partly from the distribution of the training data and partly from complex causal relationships, for example racial discrimination and other factors that make Black defendants more likely to be convicted in court. These problems can be overcome or mitigated by technical means, such as resampling or introducing instrumental variables, but a more fundamental question lurks behind them: why does our society need to predict whether a person is likely to commit a crime? Many works of science fiction, such as PSYCHO-PASS, have explored this question, and social psychology has studied the effects such predictions can have on individuals and society through self-fulfilling prophecies. As a small thought experiment: suppose we train a model to identify potential criminals as accurately as possible, and at the same time train a decision or action model to arrest the people it flags as most likely to offend. As the two models iterate on each other's data, the concept of "criminal" eventually becomes entirely defined by the models. When we set the goal of a machine learning algorithm to be the accurate prediction of potential criminals, we are tacitly assuming that whether a person commits a crime is predetermined and cannot be changed by circumstances. Both the assumption and the goal are deeply problematic, and they reflect the prejudices that the designers of the goals and algorithms hold about human society.
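To make the bias concrete, here is a minimal sketch using synthetic data rather than the real COMPAS records (all numbers and variable names are illustrative assumptions): a classifier is trained on historical labels that are biased against one group, and its false positive rate, the rate at which truly low-risk people get flagged as risky, ends up much higher for that group.

```python
# A toy illustration with synthetic data (not the actual COMPAS study):
# biased training labels lead to unequal false positive rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)          # 0/1: a protected attribute
risk = rng.normal(0.0, 1.0, n)         # latent "true risk"

true_label = (risk > 0.5).astype(int)  # unbiased ground truth
# Historical labels are biased: at the same true risk, group 1 is
# convicted more often, mimicking discrimination in past verdicts.
observed = (risk + 0.8 * group + rng.normal(0.0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, observed)  # trained on biased labels
pred = model.predict(X)

for g in (0, 1):
    # False positive rate: truly low-risk people flagged as risky.
    mask = (group == g) & (true_label == 0)
    print(f"group {g}: false positive rate = {pred[mask].mean():.2f}")
```

Resampling the training data so that both groups have the same label distribution at the same risk level would shrink this gap, which is exactly the kind of technical mitigation mentioned above; the deeper question of whether to build the predictor at all remains untouched.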
We know that every algorithm is proposed and applied to optimize some objective and solve some practical problem. But what if the objective and the problem are defined wrongly? Should we keep using these algorithms anyway? When the problem itself is wrong, improving the algorithm is "not even wrong." An algorithm is meaningless apart from the problem it is meant to solve, and the way these problems are defined reflects exactly how the problem's definer understands a series of events, and indeed the whole world. Algorithms are not objective; they are the projection of a set of values. As long as we live in a human society, algorithms cannot and should not be separated from the interests behind them. It is a pity that in today's academia and industry, most of the world's smartest brains are still concerned with improving the performance of algorithms under specific objectives, while whether our problems are correctly defined and whether our optimization goals are well designed remains comparatively undiscussed.
Here I am reminded of a class taught by a professor of Science, Technology and Society (STS, a field that does not yet have a settled Chinese translation); one of his papers contains a particularly interesting discussion of optimization algorithms. We spend years learning optimization: maximization, linear programming, convex and non-convex optimization, dynamic programming, integer programming, and so on. But "optimization" is itself a concept created and reinforced by Western capitalist society, built on a logic of maximizing returns that is rare in the traditional values of the non-Western world. Through the Age of Exploration and the age of colonialism and empire, this logic of optimization gradually took root in the thinking of many peoples and was written into the textbooks of different countries. I don't mean that the concept of "optimization" is bad, or that we shouldn't study optimization problems; on the contrary, optimization is important and can model many practical problems. But precisely because it can model so many practical problems, we should pay more attention to its scope of application. For example, when students in a class or colleagues in a team compete with each other to maximize their own results, a non-zero-sum game is being treated as a zero-sum one, and this kind of involution may benefit no one but cram schools and capitalists (see the sketch below).
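A toy illustration of that last point, with payoff numbers I made up for the example: in absolute terms, cooperating dominates, but once the objective is reframed as beating the other player (the zero-sum view), hoarding dominates and both players end up worse off. The objective, not the algorithm, determines the outcome.

```python
# Hypothetical payoffs (my own numbers): (my score, your score)
# for each pair of actions. Sharing notes helps both students;
# hoarding helps neither, but "wins" the head-to-head comparison.
payoffs = {
    ("share", "share"): (4, 4),
    ("share", "hoard"): (2, 3),
    ("hoard", "share"): (3, 2),
    ("hoard", "hoard"): (1, 1),
}

def best_response(objective, label):
    print(label)
    for yours in ("share", "hoard"):
        # Pick my action that maximizes the given objective.
        mine = max(("share", "hoard"),
                   key=lambda m: objective(*payoffs[(m, yours)]))
        print(f"  if you {yours}, my best move is to {mine}")

best_response(lambda me, you: me,       "maximize my own score (non-zero-sum view):")
best_response(lambda me, you: me - you, "maximize my lead over you (zero-sum view):")
```

Under the zero-sum framing both players hoard and land on (1, 1) instead of (4, 4): involution in four lines of arithmetic.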
Uncontrollability
When the film laid the problems we face bare in front of me, I truly realized, with some shock, that a technology we cannot understand, cannot control, and that has enormous power to predict and change the world has already quietly seeped into every corner of our lives. We keep thinking about how to push the limits of artificial intelligence, or how long it will take AI to replace humans. What we see is always AI's weakness at human strengths, such as relational reasoning and structure generation, as if AI were still completely under our control. What we often overlook is that AI has become very good at exploiting human weaknesses, and we know very little about that. In middle school I discussed genetic engineering and human modification with classmates, and our conclusion was that such technology is too mysterious and too powerful, and should not be touched until the ethics and laws around it are mature. Thinking about it now, genetic modification may be the little brother of complex algorithms. I say "complex algorithm" not because the implementation of the algorithm itself is complicated (though of course it is), but because many current algorithms, such as neural networks, are themselves a "complex system."
I personally think complex systems, or chaos theory, is one of the most revolutionary concepts humanity proposed in the 20th century; at least after I encountered it as an undergraduate, my worldview changed quickly and substantially. Put simply, a complex system contains many individuals that interact with one another, and once they form a system, unforeseen patterns of behavior can emerge at the level of the whole. Complex systems and chaotic phenomena appear everywhere in our world, for example in weather systems. Lorenz effectively founded the field when he discovered that he could not predict the weather with his differential equations: the famous "butterfly effect" refers to the phenomenon that a tiny disturbance in the air can greatly alter the future behavior of the entire weather system. Other common examples include brain activity, the beating of the heart, stock market fluctuations, and more. In the 1970s, Conway constructed a complex system he named "The Game of Life"; I found a simulator here, and you can play with it if you are interested. Research on complex systems and chaos theory overturned the Newtonian paradigm that had stood since the 17th century (although Einstein had already overturned it from the perspective of space-time decades earlier). All of these results tell us that the world is not: you push a ball, the ball gains an acceleration, and then it moves just as you expected. When the world is composed of thousands of balls, even if you know the force on every ball exactly, you still cannot predict how the whole system of balls will behave. And as Liu Cixin's The Three-Body Problem has long since popularized, it takes only three balls to completely paralyze the brain of any human or computer. I watched the BBC's popular science documentary The Secret Life of Chaos and thought it was very good; it also discusses what mathematics and science can still do for us in such a chaotic world (statistics is part of the answer).
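Here is a minimal sketch of that sensitivity, using the classic Lorenz equations with his original parameter values (the integration is a crude Euler scheme, which is enough for illustration): two trajectories that start a hundred-millionth apart diverge until they are no more related than two random states.

```python
# A minimal demonstration of sensitivity to initial conditions
# in the Lorenz system (sigma, rho, beta are Lorenz's values).
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz equations.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb by one part in 10^8

for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}: separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation grows by many orders of magnitude until it saturates at the size of the attractor itself: knowing the equations exactly does not let you predict the system for long.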
Returning to the uncontrollability of technology, I want to focus on the neural network wave set off by deep learning. A neural network is a computational concept proposed by computer scientists in imitation of the structure of the human brain. In computation, each node is treated as a neuron: it receives signals from upstream neurons, does some processing, and passes the resulting signal on to downstream neurons. Looked at alone, each neuron's behavior is very simple, just a weighted sum followed by an activation function (the most common activation today is ReLU, which keeps positive numbers and turns negative numbers into 0). The relationship between neurons is also very simple: passing computation results along. Yet these simple little computational units can do unexpected things at the system level: face recognition, image generation, reading comprehension... A great deal of theoretical work has tried to explain the mechanics of neural networks, but no one dares claim to truly understand them. One representative discussion concerns the generalization ability of neural networks. Generalization is a model's ability to apply knowledge gained from training data to unseen examples. A simple analogy: it is like preparing for a final exam by grinding through practice problems. If, having practiced problems on a certain knowledge point, you can correctly solve new problems on that point in the exam, your generalization is good. Traditional machine learning holds that the more parameters, or capacity, a model has, the worse its generalization, rather as if your memory were so good that you never forget anything: the most effortless way to ace every problem in the question bank is to memorize all the answers without understanding a thing, and then be dumbfounded when the exam comes. The amazing thing is that neural networks do not follow this rule. The GPT-3 language model published by OpenAI last year has 175 billion parameters, yet its performance on downstream tasks completely beats earlier models. Recent theoretical work, such as the line of research around the Neural Tangent Kernel, has examined this in detail. Broadly speaking, when a huge number of neurons form a neural network, new properties emerge from the complex system as a whole. At present, both academia and industry treat neural networks as a black box: no one knows exactly what happens inside it, let alone how to precisely control it.
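For concreteness, here is a minimal sketch of the "simple neuron" described above, a weighted sum plus a ReLU, and of how stacking such trivial units forms a network; the layer sizes and random weights are of course just placeholders.

```python
# Each neuron: weighted sum of upstream signals + ReLU activation.
import numpy as np

def relu(x):
    # Keep positive values, turn negatives into 0.
    return np.maximum(0.0, x)

def layer(inputs, weights, bias):
    # Every neuron in the layer does the same trivial computation.
    return relu(weights @ inputs + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                       # signals from upstream neurons
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

# Individually each step is easy to read; it is the stacking of
# millions of such units that produces the emergent black box.
hidden = layer(x, w1, b1)
output = w2 @ hidden + b2                    # final layer, no activation
print(output)
```

Nothing in this code hints at face recognition or language understanding; those abilities appear only at the level of the whole system, which is exactly the complex-systems point above.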
What is even more troubling is that the complexity of technologies like neural networks is only one uncontrollable aspect; the other uncontrollable factor is human society itself. Human society is made up of many individuals interacting with one another, a textbook complex system. Throughout its history we can see many emergent group behaviors: the formation of political parties, the construction of economic systems, revolutions and riots, and so on. Such a complex system already gives sociology, economics, political science, psychology and other disciplines plenty to study, and now we are layering complex neural networks on top of complex social networks, with unpredictable results. As the documentary says, when these industry leaders were building their products, they at most considered business metrics such as user stickiness; negative effects like conspiracy theories, the spread of fake news, the polarization of opinion and teenage addiction are hard to fully foresee at the start of product development. As you can see, as we delegate more and more power to AI, things are getting out of hand.
Finally
Although the industry figures in the film attribute the problems brought by AI and social networks to the workings of the business model (the film's subtitles even add the term "surveillance capitalism"), the root of these problems does not lie only in the economic form. Again, it all comes down to power. In any form of society, as long as a power structure exists, algorithms will be given purposes, and those purposes, through the uncontrollability of both the algorithms and society itself, will produce unforeseeable effects.
Watching to the end was genuinely moving: so many powerful industry insiders concerned not merely with their own interests, but using their vantage point to think on behalf of all humanity, especially the next generation now growing up. How to put it... a glimmer of human civilization in a barbaric slaughterhouse.