AI: the role of the state and a progressive narrative

Artificial intelligence between hype and hysteria

According to Max Tegmark, a leading machine learning researcher at MIT, the conversation about the rapid advances in artificial intelligence (AI) and what they mean for human beings is perhaps the “most important conversation of our time” (Tegmark, 2017). AI technologies have the potential to significantly change the course of our collective future, reshaping the creation of economic value, the world of work, health and social care, governments and solutions to the climate crisis. When devising policy strategies on AI, it is crucial to acknowledge its evolution from a once ambiguous term to a tangible breakthrough innovation that touches upon most aspects of society and the economy. What once represented a mostly scientific fascination with the possibilities of an artificial human-like mind now encapsulates a spectrum of innovations at the core of an initial stage of a major wave of technological advancement.

Yet, technological change must not be an end in itself. New technologies need to tackle the big challenges of our time, including economic and social inequalities, the climate crisis, tomorrow’s work and economy, demographic change, regional imbalances, and increasing political polarisation in democracies. At the heart of contemporary AI lies the concept of automatic learning, a sophisticated application of statistical methodologies. The emphasis is not on creating machines that think autonomously, but rather on developing systems capable of discerning patterns within vast datasets and improving and innovating existing systems and processes. The creative power of major technological advancements is pivotal for human progress. From previous periods of technological upheaval, we know that novel ways of producing, working and living are a fundamental driver of societal change. Optimistic and pessimistic scenarios abound in the ongoing public and policy debate. In this chapter, discussing the risks and potentials of AI, we examine four central domains – the public sector, democracy & media, the economy, and work & labour markets – where AI can be harnessed for the greater good of society and which are particularly relevant to decision makers and the public. In the years to come, it will be crucial that progressive decision makers rise to the challenge of bringing forward a compelling narrative and policy agenda of how AI can contribute to a just, equitable and sustainable society and economy, and how this can be achieved.
Since public perception plays a pivotal role in envisioning and determining trajectories of AI and its integration into society (Richter et al., 2023), this chapter concludes with a suggestion of what the role of the state in guiding the transition to a society and economy with AI may be, as well as a proposition of the most important ingredients for a political narrative that is based on progressive values and addresses the social, economic and ecological implications of AI technologies. We suggest that progressives emphasise the dual role of the state, which should act as both a regulatory force and an intervening power to guide the development of AI, harness its social, ecological and economic potential and mitigate the risks for the most vulnerable.

The Public Sector

In the provision of public goods and governmental services, from health, social care and welfare to security, education, research and employment, AI presents both ample opportunities and risks. As elsewhere, the biggest gains of AI for the public sector lie in driving efficiency and productivity, and, if implemented in a cautious, transparent and accountable manner, it may increase trust in the public sector and democratic institutions overall. Increasing the efficiency of public services has three dimensions: improving governmental productivity, personalising services and making better policy decisions. First, AI can automate bureaucracy by standardising public administration processes and making them easier and more accessible, for instance by speeding up the processing of applications and financial assessments for social welfare eligibility or for setting up a business. These productivity improvements have the potential to streamline operations and improve service delivery, thus contributing positively to the functioning of the public sector as a whole. In the best-case scenario, AI can help shift the focus of the human workforce to interpersonal contacts and enhance citizens’ experience of bureaucratic processes (Wirtz et al., 2019). Second, AI can improve the personalisation of government services by leveraging data to understand the preferences, behaviours and needs of citizens. This is at the core of innovation in the private tech sector and must lead the way for government services too, so that citizens get the feeling that the state works for them, and not just the other way around. Third, AI can be of use in designing and improving policymaking at all levels of government, e.g. in analysing spatial and aerial images of urban environments to improve road infrastructure or mitigate the effects of climate change. Beyond efficiency and productivity, AI can support the public sector in acting as a social entrepreneur, providing public goods for the common good.
By pooling data and information with AI and making it accessible to everyone, governments can help to boost social and economic innovation. 

However, the pace of public sector innovation in many European countries remains slow, posing a challenge to the effective integration and utilisation of AI technologies. Moreover, the widespread adoption of AI applications brings forth substantial risks, notably due to the scale of their deployment, the risk of introducing systemic bias into AI systems, data security concerns, the opaque decision-making of AI systems that affects humans (the so-called ‘black-box problem’) and the scarcity of knowledge and skills in both the private and public sector (Richthofen et al., 2022). As the Post Office scandal in the UK, also known as the Horizon saga, illustrates, decisions made on the back of faulty technologies may have devastating consequences for individuals as well as for those responsible for using them, and pose severe challenges for cultivating public trust in new technologies. Whereas the Horizon scandal was due to faulty software, the challenging area of AI bias is more complex and can come in the form of systemic, statistical and human biases (Schwartz et al., 2022). As a consequence, AI can reinforce existing societal problems by replicating discriminatory societal structures or, at its worst, automating the discrimination of women* and minority groups, eroding trust in the ability of the public sector to champion AI services. In many cases, projects that digitise public services are too big and ambitious and often fail (Heckman, 2024). In places where local and active governments deliberate with citizens about the risks and potentials of new technologies and develop practical solutions, e.g. in the city of Helsinki, there has been significant progress in safely adopting new technologies and increasing trust. In order to address these challenges, it is crucial to adopt a cautious, transparent and accountable approach to implementation that fosters trust within the public.
Launching small or medium-sized pilot projects with the input of citizens can serve as a beacon for testing AI applications and creating public acceptance. Avoiding premature large-scale initiatives and instead focusing on leveraging synergies with existing digitalisation projects can mitigate risks and maximise benefits. Think small rather than big. Instead of establishing independent public AI systems, such as government-sponsored LLMs, collaboration with the European AI industry can boost innovation in the public sector and ensure compliance with ethical standards. A central instrument that can help to foster public acceptance could be an AI transparency register of all algorithms used by the public administration. For instance, the Dutch authorities published a nationwide AI register in 2022 (Weeke, 2023). Finally, regulatory measures must counter systemic or statistical biases in AI algorithms, ensuring fairness, accountability and trust in decision-making processes and public services.

Democracy & Media

The most recent acceleration of processing and transmitting information through the internet, and more specifically social media, has laid bare the risks AI poses to the spread of disinformation and to democracy overall. The possibility of supercharging microtargeted political campaigns through AI poses a real danger to a public discourse that is vital to democracy. Using AI for so-called “hypernudging” is a strategy aimed at limiting citizens’ ability to reflect upon available political options (cf. Morozovaite, 2022). In the 2010s, a scandal revolving around Cambridge Analytica made headlines when the company collected and analysed personal data from millions of Facebook users without their consent and used it for political advertising in the US elections and the UK Brexit campaign. OpenAI is currently facing similar allegations and investigations by European data protection authorities over its data collection practices, prompting the MIT Technology Review to title a topical report “A Cambridge Analytica-style scandal for AI is coming” (Heikkilä, 2023). Concordantly, a recent Freedom House report finds that “Generative artificial intelligence (AI) threatens to supercharge online disinformation campaigns” (Funk et al., 2023). Such campaigns are not exclusively but most effectively used by right-wing political forces. If used by political parties, this contributes to the general trend of democratic backsliding by turning political actors “away from democratically competing over the best arguments to unscrupulously competing over the best manipulation of emotions” (Lamura & Lamura, 2023). This may ultimately lead to the erosion of public deliberation on politics.

Additionally, it is essential to acknowledge and address the inherent structural biases present in AI systems, which can perpetuate discrimination and inequality, particularly in political contexts. 

Conversely, AI can also aid in simplifying fact-checking processes and bolstering data journalism. By harnessing AI tools, journalists and fact-checkers can more efficiently verify information, enhancing the integrity of the public discourse and promoting informed decision-making among citizens. AI also holds promise in political education by facilitating personalised learning experiences and providing access to diverse perspectives. 

In order to address these challenges and harness the potential benefits of AI to promote a pluralistic and democratic public discourse, proactive measures are necessary. Firstly, there is a need to build up the capacity of NGOs and media organisations to counteract misinformation and disinformation campaigns.

Additionally, social media platforms must be held accountable for their role in propagating harmful content, and stringent regulations should be enacted to mitigate the spread of false information. Regulatory initiatives, including the Digital Services Act, lay the groundwork for more accountability and control of online content with regard to the spread of illegal content, disinformation and transparent advertising. The Artificial Intelligence Act (AIA) of the European Union is currently taking centre stage globally in pioneering and shaping similar regulation for AI, opting for a risk-based approach grounded in liberal democratic values. Yet, there remains more work to do. In addition to attempts to shut down harmful content, progressives should aim at building capacities within civil society, the media and the education sector, enabling them to effectively engage in standardisation processes (advocacy function) and to effectively oversee the compliance of governments and the AI industry with given standards in the future (watchdog function). Simultaneously, a concerted effort to educate the public becomes paramount, with public media playing a central role in shaping an informed citizenry capable of discerning truth from misinformation. By combining regulatory measures with proactive public education, progressives can build a more resilient democratic fabric in the face of evolving challenges in the digital age.

The Economy

AI technologies, in particular generative AI, are believed to lead to a fundamental industrial transition, boosting economic growth, raising productivity levels and creating new jobs. Businesses and industries are rapidly adapting to these developments, with private investments in generative AI reaching a record high of $120 billion in 2021. According to McKinsey (2023), generative AI could add $2.6 trillion to $4.4 trillion to the global economy, with a significant impact on nearly all industrial sectors. Combined with higher public investments driven by a new industrial policy (Jung, 2023), AI may represent a cornerstone in reshaping the innovation and industrial landscape, bolstering Europe’s sovereignty and reducing regional inequalities. Equally important, public and private investments in crucial green technologies boosted by generative AI can help to avert more than half of the climate tipping points in the next five years, making green technologies competitive in key markets and accelerating decarbonisation worldwide (Stern & Romani, 2023). However, materialising these potentials is associated with a number of hurdles. Because rebound effects, e.g. in the form of the energy and water needed for cooling computing systems, threaten to undo gains in energy and resource efficiency, it is essential not only to consider more efficient production and sustainable operation of AI infrastructures (sustainable AI) but also to work on AI for sustainability, i.e. deploying the technology for specific tasks in the ecological transformation of industry and society. In the knowledge economy, this means employing AI to develop intelligent systems, able to consider the complexities of environmental governance and to process real-time data, in order to provide the knowledge needed to sustain life.

Besides the ecological and economic potentials for businesses and industries, two major concerns require attention: the risk of increasing market monopolies and the loss of individual agency. On a structural level, the rapid growth of AI technologies, particularly generative AI, may entrench an already oligopolistic tech sector driven by scale, network effects and feedback loops. Weak antitrust enforcement in the past has contributed to a handful of dominant tech companies controlling digital markets. These gatekeepers use their unprecedented access to computing infrastructure, data and expertise to influence the development and commercialisation of AI (Lynn et al., 2023). This will most likely lead to a dependence of smaller AI firms on the data infrastructure provided by tech giants. However, there is a strong case to be made for a level playing field in the tech sector and for the development of AI boosting competition and responding to the needs of consumers, businesses, governments and citizens. Accordingly, good AI governance necessitates not only fundamental data usage regulations to counter the de facto data ownership by tech giants but also the proactive utilisation of available tools to combat anticompetitive behaviour within the AI domain.

On an individual level, the manipulation and exploitation of consumers and their personal data in a commercial context – through voluntary or forced subordination – poses a threat to individual agency. First and foremost, it is up to individuals to take responsibility for their digital self-empowerment, consciously making informed decisions in order to strike a balance between machine assistance and paternalism by a machine. Yet, this balance can only be struck if the market offers a sufficient selection of AI applications in which customers are not also the product. The current situation presents a classic case of market failure: start-ups that could develop user-friendly and ethical bots often lack the necessary data to train them. Users, on the other hand, are reluctant to pay for digital services, opting for free services that leverage user-generated data. To remedy this, government intervention can be key. Implementing regulations, such as transparency requirements, and providing open access to learning data can encourage the development of diverse and user-centric AI solutions. Government support, including seed funding for start-ups developing neutral assistant programs, can further foster market diversity (Ramge, 2020).

In conclusion, AI promises a transformative impact on economic development, productivity and sustainability. Yet, realising these benefits entails addressing significant challenges, encompassing the need for sustainable AI practices and robust governance, such as fundamental data regulations and proactive measures against anticompetitive behaviour. Balancing individual empowerment with market dynamics is pivotal, emphasising the necessity of a diverse AI landscape and of government intervention to foster ethical, user-centric solutions and ensure the responsible evolution of AI in the global economy. As a leader in regulation, Europe is an important player. While it must use its regulatory power to minimise risks and lower barriers to market entry, it also needs to recognise that the European technology industry requires extensive public and private investments to maintain competitiveness against Asian and American counterparts in the long term.

Work & Labour Markets

In the current discourse on AI, short-term risks often take centre stage. One of the most prominent concerns revolves around the displacement of jobs due to automation and the substitution of tasks driven by AI technologies. As AI systems become increasingly capable of performing tasks traditionally carried out by humans, there is a palpable fear of widespread unemployment and economic instability. These concerns echo past debates about technological innovations, yet history underlines the need for an open-minded but vigilant approach by governments and organised labour to harnessing new technologies’ potential.

Ten years ago, Frey and Osborne (2013) kicked off a public debate on “technological unemployment” with their analysis of the susceptibility of different occupations to replacement by automation. Historically, however, emerging technologies have tended to generate more jobs and increase welfare rather than diminish them. Yet, this pattern held true primarily because technology was often utilised to automate routine tasks, providing workers with the opportunity to enhance their skills and transition to roles demanding a higher skill set and cognitive abilities. The advent of AI introduces a potential shift in this pattern, given its capacity to perform complex cognitive tasks affecting low- and high-skilled workers alike, in manufacturing, the service sector and beyond. In particular, the rapid rise of LLMs in 2023 has led to a resurgence of fears about AI’s potential negative impact on jobs across all levels of the income and skills scale. Countering this fear, a recent MIT study finds that most AI applications are currently far too expensive to replace humans in most professions (Svanberg et al., 2024). Yet, the cost-centric nature of this argument prompts questions about its universal applicability and, more fundamentally, about whether cost is a sufficient criterion for allocating AI. Turning the perspective to the “input side” of AI systems, i.e. the human labour involved in training and maintaining AI models, we propose that the perspective offered by the MIT study is insufficient. Firstly, as Casilli and Kill (2024) have argued, the increasing rollout of AI applications – contrary to the perceptions held by the public – increases the demand for human labour, for instance by creating tasks like system training and content moderation. However, this so-called “micro-work” is not inherently synonymous with “good work”, as it frequently involves outsourcing to regions with lower wages and substandard working conditions.
Notably, it can be cost-saving for companies to employ workers to simulate an AI model rather than running it autonomously. Even where people “only” do training and maintenance work on AI systems, reports have pointed to the exploitative working conditions of so-called “click workers” (Eldebani, 2023). Therefore, progressives should scrutinise the ramifications of exclusively adhering to a cost-centric rationale. A different vision for the relation of AI to work is stated in the German Trade Union Confederation’s (DGB) concept paper “AI for Good Work”, namely that “one of the primary objectives should be to use AI as assistance systems in order to reduce work loads and promote Good Work” (DGB, 2020: 4). To meet workers’ actual needs, effective AI implementation requires transparent communication from the outset, clear, collective purposes and institutionalised procedures for collaborative decision-making.

Beyond the changes AI introduces to the labour market, it is already changing the world, the place, the experience and the culture of work. Tangible changes in the workplace are taking place along three dimensions that we accordingly call “bricks” (physical dimension), “bytes” (digital dimension) and “behaviour” (cultural dimension). “Bricks” refers to the physical space of the office and its reconfiguration in the light of the emergence of the knowledge economy. Here, AI can improve workplace conditions, for instance by optimising energy use, transport and safety monitoring. On the digital level (“bytes”), AI shows much potential to automate routine tasks, accelerate communication and enhance data processing speed. Examples like AI-driven inventory management in retail highlight the potential cost reduction benefits, but at the same time pose questions about the evolving nature of traditional job roles. The cultural dimension (“behaviour”) entails a shift from human-to-human to human-to-non-human interactions, with AI-driven technology reshaping training, knowledge production and interpersonal dynamics. For instance, companies increasingly introduce AI-driven “co-pilots” or physical AI-powered “cobots” to assist their workers in everyday tasks. While AI technologies generally boost productivity, they also raise concerns about data security, transparency, fairness and accountability. Notably, experts warn that AI applications in the workplace could interfere with workers’ right to informational self-determination, especially through increased surveillance pressure (cf. German Federal Government, 2024). Hence, progressives must address risks at an early stage and put their concepts of good work forward to encourage a holistic debate on the trade-offs and repercussions of broad AI implementation.

In summary, this translates into a twofold quest for policymakers. First, to encourage productivity growth where it translates into increased advantages for workers. Essentially, this means refraining from conflating profitability with genuine progress. A simplistic focus on profitability in tech development and deployment often proves short-sighted, neglecting the often invisible (future) societal costs. Concordantly, pursuing genuine progress means directing AI development towards improving working conditions, raising wages and advancing society as a whole. To attain this objective, policymakers need, second, to actively collaborate with worker and employer associations in the development of policy frameworks for the implementation of AI in the workplace. A progressive AI labour policy builds on the active engagement and empowerment of all relevant stakeholders. Besides engaging stakeholders in policy formulation, this entails facilitating fair and inclusive social dialogues over the introduction of AI in workplaces themselves, as well as building robust mechanisms for feedback and evaluation at multiple levels, so that company practices can be reviewed and government policies revised where needed in the future.

The Role of the State: Towards a progressive narrative on AI

With the emerging challenges, problems and opportunities brought forward by the AI wave, progressive policymakers are compelled to craft and communicate a narrative on AI development and on the policy goals that should be at the core of a reform agenda. AI currently stands as an important canvas onto which various positive and negative imaginations of our societal and economic development are projected. This may contribute to feelings of individual and collective anxiety that are partly driving the support for right-wing populists and extremists. For progressives, the challenge lies in identifying a forward-looking and bold narrative, as well as a policy strategy, that brings together the opportunities and risks of AI technologies in relation to progressive values and policy goals: economic and social equality, sustainability, and the stabilisation of our democracies. It should be in line with progressive values and challenge emerging narratives from conservatives and the far right. Therefore, a narrative is needed that is principled, addresses the conflicts of interest, avoids both hype and hysteria, and is concrete in guiding action without getting lost in abstraction.

At present, we find ourselves in a situation where the political debate on AI on the international stage circles primarily around the vast economic potentials and the race for global AI leadership. This is not a surprise given the agenda-setting power of big business over AI in the media coverage (Richter et al., 2023). This development leads to a situation in which collective societal imaginaries of AI “are increasingly dominated by technology companies that not only take over the imaginative power of shaping future society, but also partly absorb public institutions’ ability to govern these very futures with their rhetoric, technologies, and business models” (Mager & Katzenbach, 2021). While it is crucial for progressives to reclaim this discourse, societal perspectives also vary greatly with regard to the role ascribed to governments in influencing AI development, just as different governments hold diverse visions for AI’s purpose. As Guenduez and Mettler (2023) demonstrate, many governments’ narratives share similarities but give different priority to the various roles that government can play in shaping a world with AI (e.g. “leaders”, “enablers”, “users” and “regulators”). The striking resemblance of these narratives raises a red flag for progressives: there appears to be significant ambiguity surrounding the state’s importance and the roles of other stakeholders.

At the core of any government activity should be balancing the conflicts of interest that are at stake and setting a political and regulatory framework that gives both the public and private sectors a clear sense of the direction of travel. Accordingly, in addition to defining the role the state should play in the governance of AI, it is critical to scrutinise the visions that states in the global arena are currently pursuing for AI and to develop a distinguishable progressive narrative that is both value-driven and compelling in its articulation. As Bareis and Katzenbach (2022) show, in China, AI is primarily envisioned as a tool of social order and regulation. In the US, it is framed as a powerful tool for economic growth, which is to be achieved by simultaneously investing in key technologies (e.g. semiconductors), empowering workers and deregulating markets. Finally, core EU states, including Germany and France, are advocating for a value-driven approach to AI, yet they face a challenge in translating these values into actionable strategies. In conclusion, we contend that a progressive European narrative should steer clear of both illiberal aspirations for excessive social control and the enticing promises of the ultra-liberal deregulation dogma. Democratic states must actively engage, keeping in mind that nuanced regulation serves as a potent tool to foster innovation and competition, while offering the opportunity to align the trajectory of change with progressive objectives. Given the analysis outlined above, we conclude that the state should play a dual role as both a regulatory force and an intervening power, guiding the development of AI to harness its potential and mitigate the social and democratic risks. To achieve this purpose, the state should become active in four key areas. First, the state should address potential failures in the marketisation of AI and tackle monopoly-like structures in the tech economy.
The concentration of the means and skills to develop AI in the hands of a few companies will stifle innovation in the medium and long term. This may be stating the obvious, given that the EU Digital Markets Act came into force in November 2022. Yet, the priority of progressives should be to monitor how existing regulation unfolds, learn from potential shortcomings and adjust where needed. Second, there is a strong case to be made for setting incentives to align the AI revolution with environmental, anti-discrimination and labour standards, and for updating these standards accordingly. This mission-oriented approach to AI opens an opportunity for progressives to put Europe on a path for AI that distinguishes itself from the Shenzhen state capitalism of China and the Silicon Valley approach to free markets in the US. It would be a strong signal to voters that the emphasis is on the quality of economic development, along with the potential productivity gains that AI will bring to society and the economy. Third, as the implications of AI unfold for society and the economy, progressive governments must protect those who are most vulnerable to disruption, especially workers and businesses at risk of falling behind, and safeguard democracy and a pluralistic public discourse.

Fourth, a European vision for AI must transcend the prevalent techno-utopian belief that merely deploying AI to address societal challenges will inherently lead to social progress. Rather, a distinctive progressive approach should – beyond boosting and regulating AI – flank technological innovation with complementary social innovation in the functional and organisational areas surrounding its application. In the end, the non-technical, economic, social and political factors associated with these areas are crucial in shaping how the “utilization potential of technologies will be exploited and which consequences for social development will manifest” (Hirsch-Kreinsen & Krokowski, 2024: 4).

Hence, a progressive narrative on AI may be constructed along the following lines. The latent trends accelerated by AI, such as increasing inequality, the evolution towards a service-based economy and media polarisation, underscore the importance of adopting a progressive approach to its implementation. These trends may exacerbate existing societal divides. Therefore, they highlight the pressing need for equitable access to AI technologies and the democratisation of their benefits. There is vast potential in using AI to achieve progressive goals: from enhancing productivity in public administration to mitigating skills shortages, leveraging data-driven policies, and streamlining processes in the private sector to workers’ benefit. However, realising these benefits requires a concerted effort to address issues of centralisation, monopolisation and the exclusion of a diverse set of voices from decision-making processes. Establishing a robust AI industry in Europe, fostering acceptance of AI in the public sector and bringing together stakeholders from all sectors of society are critical endeavours. Building AI capabilities within civil society and the media, and guarding against democratic backsliding in the face of the widespread adoption of disruptive AI applications, are paramount. By reframing the discourse and putting these issues at the core of a progressive narrative on how AI can be a force for positive change, progressives can inspire collective action towards a future where AI serves as a tool for the empowerment of citizens and workers rather than division. It is through a shared commitment to inclusivity, accountability and innovation that we can steer the course towards a more equitable and prosperous future powered by AI.

Literature

Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855-881.

Casilli, A., & Kill, N. (2024). „Je weiter der Ausbau der künstlichen Intelligenz voranschreitet, desto größer wird der Bedarf an menschlicher Arbeit“ [The further the expansion of artificial intelligence progresses, the greater the demand for human labour]. Soziopolis. Retrieved from: https://www.soziopolis.de/je-weiter-der-ausbau-der-kuenstlichen-intelligenz-voranschreitet-desto-groesser-wird-der-bedarf-an-menschlicher-arbeit.html

DGB – Deutscher Gewerkschaftsbund (2020). DGB Concept Paper: Artificial Intelligence (AI) for Good Work. Retrieved from: https://www.dgb.de/downloadcenter/++co++b794879a-9f2e-11ea-a8e8-52540088cada

Eldebani, F. (2023). click work – it’s complicated. Superrr Lab. Retrieved from: https://superrr.net/2023/06/20/clickwork.html 

Funk, A., Shahbaz, A., and Vesteinsson, K. (2023). “The Repressive Power of Artificial Intelligence,” in Funk, Shahbaz, Vesteinsson, Brody, Baker, Grothe, Barak, Masinsin, Modi, Sutterlin eds. Freedom on the Net 2023, Freedom House.

German Federal Government (2024). Antwort der Bundesregierung auf die Kleine Anfrage der Fraktion der CDU/CSU – Drucksache 20/10030 – Künstliche Intelligenz und die Zukunft der Arbeit [Response of the Federal Government to the minor interpellation of the CDU/CSU parliamentary group – Printed paper 20/10030 – Artificial intelligence and the future of work]. Retrieved from: https://dserver.bundestag.de/btd/20/101/2010198.pdf

Guenduez, A. A., & Mettler, T. (2023). Strategically constructed narratives on artificial intelligence: What stories are told in governmental artificial intelligence policies?. Government Information Quarterly, 40(1), 101719.

Heikkilä, M. (2023). A Cambridge Analytica-style scandal for AI is coming. MIT Technology Review. Retrieved from: https://www.technologyreview.com/2023/04/25/1072177/a-cambridge-analytica-style-scandal-for-ai-is-coming/ 

Hirsch-Kreinsen, H., & Krokowski, T. (2024). Promises and Myths of Artificial Intelligence. Weizenbaum Journal of the Digital Society, 4(1).

Lamura, E., & Lamura, M. (2023). HYPERNUDGING? Please No, Touch Me Gently. In: Jankowski, P., Höfner, A., Hoffmann, M. L., Rohde, F., Rehak, R., & Graf, J. (Eds.), Shaping Digital Transformation for a Sustainable Society. Contributions from Bits & Bäume. Technische Universität Berlin. https://doi.org/10.14279/depositonce-17526

Lynn, B., von Thun, M., Montoya, K. (2023) AI in the Public Interest: Confronting the Monopoly Threat. Open Markets Institute. Retrieved from: https://www.openmarketsinstitute.org/publications/report-ai-in-the-public-interest-confronting-the-monopoly-threat 

Mager, A., & Katzenbach, C. (2021). Future imaginaries in the making and governing of digital technology: Multiple, contested, commodified. New Media & Society, 23(2), 223-236.

McKinsey Global Institute (2023). The economic potential of generative AI: The next productivity frontier. Retrieved From: https://www.mckinsey.de/~/media/mckinsey/locations/europe%20and%20middle%20east/deutschland/news/presse/2023/2023-06-14%20mgi%20genai%20report%2023/the-economic-potential-of-generative-ai-the-next-productivity-frontier-vf.pdf 

Morozovaite, V. (2023). Hypernudging in the changing European regulatory landscape for digital markets. Policy & Internet, 15(1), 78-99.

Osborne, M. A., & Frey, C. B. (2013). The future of employment: How susceptible are jobs to computerisation? University of Oxford.

Ramge, T. (2020). postdigital: Wie wir Künstliche Intelligenz schlauer machen, ohne uns von ihr bevormunden zu lassen [Postdigital: How we make artificial intelligence smarter without letting it patronise us]. Murmann Verlag; first edition.

Richter, V., Katzenbach, C., & Schäfer, M. S. (2023). Imaginaries of artificial intelligence. In Handbook of Critical Studies of Artificial Intelligence (pp. 209-223). Edward Elgar Publishing. 

Richthofen, G., Siebold, N., & Gümüsay, A. A. (2022, February 23). The promises and perils of applying AI for social good in entrepreneurship. LSE Business Review. Retrieved from: https://blogs.lse.ac.uk/businessreview/2022/02/23/the-promises-and-perils-of-applying-ai-for-social-good-in-entrepreneurship/

Schwartz, R., et al. (2022). Towards a standard for identifying and managing bias in artificial intelligence (Vol. 3). US Department of Commerce, National Institute of Standards and Technology.

Stern, N., & Romani, M. (2023). The global growth story of the 21st century: driven by investment and innovation in green technologies and artificial intelligence. London: Grantham Research Institute on Climate Change and the Environment, London School of Economics and Political Science, and Systemiq. Retrieved from: https://www.lse.ac.uk/granthaminstitute/wp-content/uploads/2023/01/The-global-growth-story-of-the-21st-century-driven-by-investment-in-green-technologies-and-AI.pdf

Svanberg, M., Li, W., Fleming, M., Goehring, B., & Thompson, N. (2024). Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?. Available at SSRN 4700751.

Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence (First ed.). New York: Knopf.

Weeke, V. (2023). Transparente digitale Verwaltung: Umsetzbarkeit eines KI-Registers in Deutschland [Transparent digital administration: Feasibility of an AI register in Germany].

Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—applications and challenges. International Journal of Public Administration, 42(7), 596-615.

Authors

Florian Ranft

Member of the Management Board and Head of Green New Deal
Florian Ranft is a member of the Management Board and Head of Green New Deal & Progressive Governance at Das Progressive Zentrum.

Sebastian Pieper

Project Manager
Sebastian Pieper works as a project manager with a focus on democratic innovation, state and administrative reform and political strategy. In this role, he is responsible for projects in the thematic area of "Resilient Democracy".

Jonah Schwope

Project Assistant
Jonah is a project assistant in the focus area Resilient Democracy. Previously, he completed his bi-national bachelor's degree in Public Governance across Borders at the University of Münster and the University of Twente. He focuses on unequal power relations in shaping the socio-ecological transformation and is particularly interested in the strategic influencing of norms & discourses. In addition to his studies, he worked at the Chair of Global Environmental Governance at the University of Münster and completed internships at LobbyControl, the LIFE-Initiative e.V. and the European Parliament.
