Artificial Intelligence, a map of issues
Clara Punzi
The way we live and work is changing rapidly due to digital technologies and Artificial Intelligence. A workshop held on June 10, 2025 at the University of Pisa – organized by the Artificial Intelligence research group of the Scuola Normale Superiore di Pisa and the Institute for Advanced Studies “Carlo Azeglio Ciampi” of the Scuola Normale, with the support of the “Fondazione FAIR – Future Artificial Intelligence Research” – explored problems and possible solutions concerning work, rights, information, policy options and the risks of military use of these technologies.
The large-scale adoption of Artificial Intelligence (AI) is transforming the way we live and work. Driven by a rapid technological acceleration, supported by the economic strategies of large digital companies and the political choices of governments, Artificial Intelligence is redefining labor markets, production models and the world economy. This change raises urgent questions: the destruction of jobs, algorithmic control, the power of digital platforms, growing inequalities, and the structural and cultural transformation of society.
These were indeed some of the issues discussed during the workshop “AI, Labour and Society”. The meeting brought together scholars from different disciplinary fields and geographical contexts, giving rise to a rich and multifaceted discussion of the trajectories of technological innovation, which highlighted how these are not limited to merely technical transformations but are deeply intertwined with power dynamics, social inequalities and political conflicts.
A first key theme concerns the ambivalent relationship between technology, work and social structures: digital technologies offer the possibility of enhancing numerous human skills and, above all, production, but they also present the risk of replicating or exacerbating mechanisms of exploitation. Since the first industrial revolution, technological progress has contributed to changing the relationship between capital and labour, generally to the advantage of the former, through control over property rights and access to technologies. The most recent advances in Artificial Intelligence reproduce this ambivalence: on the one hand they promise to free up time, reduce repetitive tasks and open up new employment opportunities; on the other they can make entire categories of jobs obsolete or introduce management models that increase precariousness and weaken rights and protections. Studies in this field [1] suggest that the impact of Artificial Intelligence on work varies across professions and skills, widening the gap between high- and low-skilled occupations, with a consequent polarization of wages.
Behind the deceptive presumption of neutrality of AI and the algorithmic provision of services lies a significant exercise of social power, which translates into the consolidation of monopolistic control by the dominant players in the technology sector (the so-called Big Tech) over infrastructures and data flows. This process has profound repercussions on society as a whole, limiting individual freedom and the ability to act autonomously, with particularly damaging effects for the most vulnerable groups. The negative implications are not limited to the social level but extend to the geopolitical one, where a digital colonialism based on the asymmetry between states in access to and control of technologies is increasingly marked and evident [2].

A key example is the development of the platform economy, an economic model based on digital infrastructures that act as intermediaries between users, companies or workers, often through automated management, evaluation and payment systems, profiting from the interactions between the parties involved. This model has transformed numerous sectors, redefining the methods of consumption, production and employment. Specifically, platform work includes extremely heterogeneous professional activities that can be carried out either online, i.e. remotely, such as translations, programming, data entry or microtasks, or on location, i.e. through the algorithmic assignment of services then provided in the physical world, such as home deliveries, passenger transport or domestic assistance. Workers in both categories often operate in conditions of strong job uncertainty and increasing precariousness, with limited protections and a dangerous exposure to so-called algorithmic management: an automated system that determines access to jobs, evaluates performance and establishes compensation.
On the one hand, digital platforms present themselves as tools capable of offering greater flexibility and access to work compared to traditional channels, and attract the labor of the most penalized groups, such as migrants, who are often excluded from other forms of recruitment. On the other hand, the most recent economic analyses reveal how, behind this promise, the possibilities of collective organization are weakened, surveillance is expanded in opaque and degrading forms, and processes of alienation and exploitation are aggravated [1, 3].
To address these issues, the workshop “AI, Labour and Society” opened a dialogue between social science researchers working on the socio-economic consequences of technology and experts who design and implement Artificial Intelligence systems. This collaboration is essential not only to mitigate the potential negative consequences for society, but also to direct technological progress towards more equitable and sustainable outcomes. Technology, in this sense, is never neutral: it reflects and reconfigures existing economic and institutional relationships, helping to define, often in an opaque and unequal way, the directions of economic and social change.
A common horizon for research and policies
An important point in this debate concerns how policies – in Europe in particular – can shape the development trajectories of Artificial Intelligence, with the aim of ensuring that the ways in which digital technologies are developed, used and monitored do not conflict with shared values and democratic principles. The Roadmap for AI Policy Research [4], developed on the occasion of the AI Policy Research Summit held in Stockholm in November 2024 and presented in Pisa by Petter Ericson (researcher in Responsible AI at Umeå University, Sweden), defines a shared vision that promotes research on the policies and governance of a “responsible” Artificial Intelligence, acting as a reference framework for collaboration between academic institutions, industry, governments and civil society. This vision, based on scientific evidence, ethical principles, sustainability and inclusiveness, establishes a series of fundamental principles, research priorities and concrete actions for a trajectory of technological development in which the impact of Artificial Intelligence on society is beneficial and ecologically sustainable.
Working for the algorithm
A significant part of the Pisa workshop was dedicated to a critical analysis of the role of Artificial Intelligence in the economic system. The impact of digital platforms on local labor markets was documented by the analysis – by Jacopo Tramontano and others – of Amazon’s expansion in Italy [5], showing how the arrival of new warehouses, while generating increased productivity and new jobs, entails overall negative effects on wages and local employment, also reinforcing mechanisms of disruption and competition with local businesses. In parallel, a quantitative analysis of the Italian context [6] highlighted that platform workers do not constitute a homogeneous category but present different profiles of vulnerability and degrees of dependence on this type of employment, determined by a combination of economic and work-related factors and individual conditions, such as gender, age, geographical location and level of education. Overall, the analysis shows that the critical issues related to platform work do not cancel out its attractiveness, especially in situations of economic disadvantage, while at the same time highlighting the inadequacy and inefficiency of the traditional labor market.
Legislation for the protection of rights
Can legal provisions be an answer to these problems? Giorgio Pedrazzi’s presentation [7] examined the potential of the legal instruments currently available for protecting the rights of platform workers. European legislation already provides two tools in particular: algorithmic transparency, designed to reduce opacity by requiring platforms to disclose how automated decisions that affect workers – such as performance evaluations, payments and disciplinary actions – are made; and data portability, designed to give workers direct control over their work data, thus strengthening their bargaining power and mobility between platforms. Combining these two aspects can help mitigate the damage resulting from algorithmic management practices in the platform economy. The challenge remains to translate these regulatory measures into effective technological practices aimed, among other objectives, at standardizing technical protocols, strengthening regulatory capacity, explicitly supporting workers’ collective actions, and increasing international cooperation on governance. Pedrazzi points out, however, that technical-regulatory approaches are already emerging that demonstrate how a fruitful integration between laws and technologies is possible. One example is that of data cooperatives, intermediary organizations such as Worker Info Exchange that allow workers to collectively exercise their rights over work data. By sending platforms requests for access to personal data, these cooperatives enable the collection, management and reuse of data, promoting systematic and independent audits, rights campaigns, and more transparent oversight of algorithmic management. Taken together, these tools are fundamental elements for strengthening protections in platform work.
Information in the age of Artificial Intelligence
The discussion at the Pisa workshop also addressed the problems of information, an area in which the adoption of generative Artificial Intelligence raises significant concerns at the democratic level as well. Riccardo Corsi’s analysis [8] showed that Artificial Intelligence, in addition to putting at risk fundamental principles of journalism – such as the verification of sources, respect for privacy and the fight against hateful or discriminatory content – can alter the entire information ecosystem, undermining the public’s ability to access reliable information and to consciously direct their behavior in the digital space as well as in the physical one. While some are openly opposed to the adoption of Artificial Intelligence, others support inclusive governance strategies, fair remuneration models and the participatory design of new technologies. The Italian case of Il Manifesto exemplifies an alternative path in which innovation is not subordinated to opaque contracts with Big Tech, but is co-developed internally by the editorial staff, and is therefore transparent and ethically aligned with journalistic values. MeMa, an acronym for Memoria Manifesta, is a concrete example of “community Artificial Intelligence”: a generative AI tool designed and developed together with journalists to support their professional activities through the critical and contextualized use of the newspaper’s historical archive.
The shadow of the military
Finally, Dario Guarascio highlighted the interdependence between civil and military trajectories in the development of Artificial Intelligence and the emergence of a digital-military-industrial complex [9]. As already discussed here, this convergence is the result of an interaction between civilian objectives (such as market efficiency, cost reduction, competition between oligopolies and mass consumption) and military objectives, linked instead to logics of command, control, maximum performance and strategic priorities, often associated with economic inefficiencies. This dynamic stems from the strategies of large digital companies on the one hand, and from the policies of the United States government on the other, and contributes to redrawing economic choices, technological hierarchies and global geopolitical balances. In the United States, Alphabet, Amazon, Apple, Meta and Microsoft – the digital monopolists – developed in the wake of research on microelectronics and information technology funded in past decades by DARPA, the agency of the United States Department of Defense responsible for developing new military technologies, which also supported many of today’s digital technologies. Today, these companies have surveillance capabilities, control critical infrastructures and technologies such as cloud computing, submarine cables and satellite networks, and have thus assumed a central role in the security, intelligence and defence strategies of the United States. On the one hand, they offer US politics tools of control and surveillance; on the other, they extend their digital dominion globally. This link strongly influences industrial and innovation policies, often marked by the promotion of dual-use technologies (usable in both the civil and military fields).
Even in the digital world, we find the mechanism of “revolving doors” between corporate leadership positions and public and political roles, which sees the same people change roles while pursuing the same strategies of profit and power. In this context, the State appears both subordinate to and complicit with Big Tech: unable to do without them, it delegates strategic functions in the civil and military fields to these companies, while at the same time strengthening their power, even on the geopolitical terrain, as demonstrated by the cases of Starlink in the war in Ukraine or Google’s supply of Artificial Intelligence technologies to the Israeli army.
In light of the challenges that have emerged, it is urgent to promote new spaces for discussion and interdisciplinary, intersectoral collaborations that help define the direction of AI research in a clearer and more shared way. A central issue remains the involvement of all interested actors, in particular those who directly suffer the effects of technological transformations, such as platform workers, but also Big Tech themselves, whose impenetrable proprietary model poses significant obstacles to the advancement of research. In this scenario, research also finds itself questioning its own role and margins of autonomy, since it often depends on infrastructures, data and funding provided by the very actors that dominate the field. To address these tensions, it becomes essential to promote broader public awareness of the issue, so that technological and political choices are no longer imposed from above but become the object of democratic and collective participation, a necessary condition for orienting technological development towards truly beneficial outcomes for society as a whole.
References
- Pianta, M. (2020). Technology and Work: Key Stylized Facts for the Digital Age. In: Zimmermann, K. (eds) Handbook of Labor, Human Resources and Population Economics. Springer, Cham. https://doi.org/10.1007/978-3-319-57365-6_3-1
- Muldoon, J., Wu, B.A. (2023). Artificial Intelligence in the Colonial Matrix of Power. Philosophy & Technology, 36, 80. https://doi.org/10.1007/s13347-023-00687-8
- Zuboff, S. (2023). The age of surveillance capitalism. In Social theory re-wired (pp. 203-213).
- Dignum, V., Régis, C., Bach, K., Bourgine de Meder, Y., Buijsman, S., de Carvalho, A. P. L. F., Castellano, G., Dignum, F., Farries, E., Giannotti, F., Anh Han, T., Helberger, N., Hellegren, I., Houben, G.-J., Jahn, A., Joshi, S., Lamine Sarr, M., Lewis, D., Lind, A.-S., … Tucker, J. (2024). Roadmap for AI policy research. AI Policy Research Summit, Stockholm, November 2024. AI Policy Lab, Umeå University. https://aipolicylab.se/news-and-events/ai-policy-summit/roadmap-for-ai-policy-research/
- Tramontano, J., Cirillo, V., and Guarascio D. (2025). The Impact of Amazon on Italian Local Labor Markets – a Staggered Difference-in-Differences Approach. AILS 2025 Workshop, June 2025. University of Pisa.
- Punzi, C., Cirillo, V., Guarascio D., Pellungrini, R. and Giannotti, F. (2025). Platform workers not by chance. A machine learning approach to explore digital labor markets. AILS 2025 Workshop, June 2025. University of Pisa.
- Pedrazzi, G. (2025). Bridging Law and Code in Algorithmic Management. Empowering Worker Rights Through Transparency and Portability. AILS 2025 Workshop, June 2025. University of Pisa.
- Corsi, R. (2025). Large Language Models and the Public Arena – A Threat to Democracy? Insights from Italian Journalism. AILS 2025 Workshop, June 2025. University of Pisa.
- Guarascio, D. and Pianta, M. (2025). Digital technologies: civilian vs. military trajectories. LEM Papers Series, Laboratory of Economics and Management (LEM), Sant’Anna School of Advanced Studies, Pisa, Italy.