
Intelligenza artificiale e algoritmi: datificazione, politica, epistemologia

Numerico, T.
2019-01-01

Abstract

The aim of the present paper is to show the evolution of the concept of Artificial Intelligence (AI) and of the different technical methods that progressively informed and organized this concept. The article presents a view of the evolution of such a notion: 1) with special regard to the social, political and epistemological consequences of the chosen technical solutions, 2) with special attention to the parallel transformation of the concept of human intelligence.

The recent major successes of AI are based on datification and on the availability of huge quantities of information relative to the traces left behind by people's online behaviours. Big Data methods, together with machine learning algorithms, have the purpose of interpreting data and creating pattern recognition methods that discover correlations between data series. Algorithms exploit such correlations, which are not, properly speaking, causal relations, in order to produce anticipations of future behaviours, inferring regularities and measuring probabilities grounded in past actions. Moreover, algorithms work on the clusterization of people according to their activities and other personal characteristics, such as where they live, who their friends are, etc. The implicit foundation of data science is the induction principle, which ‘guarantees’ that the past will be similar to the future and that people who share some peculiarities tend to behave similarly in corresponding situations. This is an interpretative organization of data, obtained via the datification of online traces and the implementation of adequate machine learning algorithms. Datification itself implies that data are cleaned and arranged in a form that the program can understand. The pretence of neutrality of such complex procedures blurs the activity of interpretation implicitly embedded in the system, lending it the allure of a neutral measuring method.

The radical success of Big Data and machine learning algorithms invites the assignment of decision-making responsibility to machines, because they are the only agents capable of managing the huge quantity of available data. It becomes more and more difficult to control the output of complex technical systems, even when the results of their procedures impact human beings' lives. As Norbert Wiener already suggested, technical systems could exclude humans from feedback loops because humans are too slow to keep up with the rhythm of the technical decision process. This is the first issue under discussion in the present paper. The second issue is that the machine, as Turing underlined, need only pretend to be intelligent convincingly enough to take in inexperienced judges. If it is not possible to control the actions of the devices because they are too fast and complex to be explicitly understood – and the system is programmed to take in humans – how can we trust machines? The third issue regards technology as a socio-technical system that, differently from science, does not aim at understanding the external world: it is rather a medium, a representation and an intervention that orientates the world according to social and political criteria. It is necessary to ask who is in charge of the governance of such a system and what the objectives of such a transformation are. It is crucial, then, to delineate the rules, powers and intentions that underlie the design of socio-technical systems, in order to choose democratically which methods are most favourable to society as a whole.
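To make concrete the inferential pattern the abstract describes (clustering people by their datified traces, then projecting a cluster's past behaviour onto newcomers), the following is a minimal sketch under invented assumptions, not the paper's method: the feature names, numbers and behaviour labels are hypothetical, and k-means via scikit-learn stands in for whatever clustering a real platform might use.

# Hypothetical illustration only: synthetic "datified" people, an
# off-the-shelf clustering algorithm, and an induction-style
# anticipation of future behaviour. Nothing here comes from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Datification: each person is reduced to a numeric trace vector,
# e.g. (hours online per day, number of friends, posts per week).
light_users = rng.normal([2.0, 50.0, 1.0], [0.5, 10.0, 0.5], size=(30, 3))
heavy_users = rng.normal([8.0, 300.0, 20.0], [0.5, 10.0, 2.0], size=(30, 3))
X = np.vstack([light_users, heavy_users])

# Recorded past behaviour (1 = clicked a given ad, 0 = did not).
past_behaviour = np.array([0] * 30 + [1] * 30)

# Pattern recognition: cluster people by the similarity of their traces.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Induction principle in action: a newcomer is assigned to the nearest
# cluster, and the majority past behaviour of that cluster becomes the
# anticipated future behaviour of the newcomer.
newcomer = np.array([[7.5, 290.0, 18.0]])
cluster = kmeans.predict(newcomer)[0]
peers = past_behaviour[kmeans.labels_ == cluster]
anticipated = np.bincount(peers).argmax()
print(f"cluster {cluster}: anticipated behaviour = {anticipated}")

In a realistic pipeline the features would be standardized before clustering, since raw scales distort distance-based similarity; the sketch keeps only the epistemological shape of the inference, in which correlations observed among past peers are projected, by induction, onto a new individual.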
Numerico, T. (2019). Intelligenza artificiale e algoritmi: datificazione, politica, epistemologia. Consecutio Rerum, vol. 3, n. 6 (April 2019), 241-271.


Use this identifier to cite or link to this document: https://hdl.handle.net/11590/353045