United for Now: Navigating AI Collaboration and Rivalry

Mohammed Elsoukkary

Artificial intelligence is making significant headway in the realm of intergovernmental relations. It has rapidly become one of the most important loci of discussion at national, regional, and international levels, as governments and intergovernmental organizations scramble to understand and influence the development of this technology.

Since the adoption of the US-proposed UN General Assembly resolution on safe and trustworthy artificial intelligence in March 2024, followed in July by the China-proposed General Assembly resolution on capacity building in artificial intelligence, the global stage has seen a surge in activity centered on AI. Workshops, meetings, and summits have been convened, with governments and intergovernmental bodies across the world adding their perspectives to the evolving puzzle of how to manage and direct the future of AI development.

The Evolving AI Landscape

Shortly after the adoption of the General Assembly resolution in March, the fourth edition of the International Telecommunication Union's (ITU) AI for Good Summit, held in June 2024, brought together key players from the public and private sectors of the global AI industry, including Geoffrey Hinton, often referred to as the godfather of AI. The summit featured prominent speakers from across the AI space and facilitated valuable dialogue between the private, public, and intergovernmental sectors.

In July, the African Union (AU) endorsed its Continental Artificial Intelligence Strategy, which emphasized the need to bridge digital divides, address risks inherent in AI, and upgrade member states' infrastructure to support AI development and effective integration within their operational systems. The strategy called for collaborative partnerships and international cooperation to promote development through capacity building while ensuring safety.

The first week of September saw the UN-China Joint Workshop on capacity building, held in Shanghai, which highlighted the need for international collaboration to foster equitable access to AI through capacity building and talent development.

The following week, the G20 issued its Maceió Ministerial Declaration on Digital Inclusion for All, which stressed the importance of the safe development of AI and its potential role in achieving sustainable development. The declaration featured an annex focused on ‘enabling resources for the development, deployment, and use of AI for good and for all,’ while acknowledging existing divides and gaps and the necessity of international collaboration to reduce them.

Shortly thereafter, the UN High-Level Advisory Body on AI published its final report, “Governing AI for Humanity,” which presented a series of recommendations gathered from over a year of consultations with experts in the field of AI. These include establishing an independent international panel on AI, holding twice-yearly multi-stakeholder policy dialogues, and creating frameworks for AI standards, capacity building, and global AI governance.

On the eve of the UN General Assembly’s High-Level Week, Singapore and Rwanda published the AI Playbook for Small States. This publication, leveraging feedback from members of the Forum of Small States, identified key challenges particular to this group of countries, such as limited resources, access to data, and talent gaps, and called for a strategic, tailored approach to AI development, integration, and cooperation.

On the margins of the High-Level Week, the US announced the Partnership for Global Inclusivity on AI, as well as its AI Global Research Agenda and the AI in Global Development Playbook. Secretary of State Blinken, along with eight tech juggernauts (Amazon, Anthropic, Google, IBM, Meta, Microsoft, Nvidia, and OpenAI), announced a commitment of $100 million to unlock AI as a tool for sustainable development. They identified three key areas of focus: increasing access to AI models and compute, capacity building, and expanding local datasets.

The Summit of the Future, held on the twenty-second and twenty-third of September, adopted the Pact for the Future, featuring the Global Digital Compact (GDC) as the first comprehensive global multilateral framework for digital cooperation. The GDC aims to close digital divides, promote a safe digital space, and advance governance approaches to emerging technologies, all guided by the principles of the UN Charter.

On the 25th of September, China and Zambia co-hosted the High-Level Meeting on International Cooperation on Capacity-Building of Artificial Intelligence, geared towards the implementation of the July General Assembly resolution on capacity building. The meeting explored mechanisms to prevent the emergence of an AI gap between developed and developing countries. Representatives from the Global South expressed concerns over the widening divide, their inclusion in the AI development process, and the potential impacts on sustainable development.

This series of meetings and events underscores governments’ growing focus on identifying the best ways forward in their pursuit of AI development, as well as on improving mechanisms for cooperation around the technology.

The workshops, summits, and meetings held since March 2024 revealed several key trends that have woven their way through the intergovernmental approach to AI. These common themes will likely form the foundation of future engagement mechanisms between various actors in the AI space, both in the public and private sectors.

One of the most notable trends is governments’ growing acknowledgment that they are struggling to keep up with the pace of AI development. Corporations, not governments, are setting the speed of progress, and governments are grappling with the realization that they not only have little ability to steer the direction of AI’s development, but also that they may soon lose the ability to rein it in.

As a result, regulation and governance have become focal points of discussion on national and international levels. Across various forums, summits, and workshops held over the course of this year, the challenge of governing a technology that remains largely misunderstood has emerged as a universal concern. Governments and intergovernmental bodies are increasingly calling for an international, interoperable framework that provides tools to regulate AI effectively and mitigate its potential misuse.

Disparities in access to AI development have also been a recurring theme. Echoing the long-standing digital divide, access to AI development resources and talent remains concentrated in the Global North, and even within that group of countries, powerful corporations dominate the field. Discussions on the international stage have repeatedly stressed the need for equitable access to AI and for the transfer of technology, expertise, and capacity-building initiatives to ensure that no one is left out of AI’s benefits.

In conjunction with these concerns, a debate is emerging around the imposition of cultural biases through AI systems, as developers' biases can seep into models and amplify cultural and linguistic disparities. This has prompted repeated calls for broader representation during the development of AI systems to ensure greater cultural and linguistic inclusivity.

Across all these discussions, the need for international cooperation is repeatedly emphasized; governments, particularly from the Global South, have raised concerns over the lack of transparency in AI development, fearing that they are being left behind as a growing AI divide emerges in the international arena.

However, while the concerns raised in these discussions are genuine and the proposed mechanisms offer possible paths forward, intergovernmental engagement must intensify further to produce implementable, actionable solutions. Initiatives bringing together the public and private sectors are a step in the right direction, but key challenges in engagement, such as state interests, remain to be addressed.

Collaboration, Competition, and State Interests

As illustrated by the outcomes of the various meetings and conferences, global discussions on AI often emphasize collaboration, portraying an ideal of positive engagement for the collective good. Yet, the underlying competition between states—present in other arenas—is rarely acknowledged openly, as if AI governance is expected to somehow operate outside these realities.

While leading AI nations like the US and China have made commitments to engage constructively with the Global South to address the needs of these countries in the field of AI, there is an inherent imbalance in the equation as it is portrayed on the international stage. The Global South, in this scenario, stands to gain access to AI technologies and expertise it might otherwise lack, while providers of the technology are being asked to relinquish their technological advantages without substantial returns. This is unlikely to unfold; governments seldom give up their advantages without adequate returns or compensation, particularly when it comes to difficult-to-develop technologies.

The field of AI may witness, as engagements begin to dive into the substantive and technical aspects of cooperation beyond the slogans, a repeat of the dynamics seen in the field of cybersecurity. In that field, the gap between the positions of technologically advanced countries and those in need of support proved nearly insurmountable: the former frequently refused to relinquish their technological advantages, while the latter demanded increased access to the technology.

If the self-interest of technologically advanced states is not acknowledged, AI cooperation may follow a similar path, producing vague outcomes rather than concrete solutions. Framing the conversation solely around collaboration risks stifling honest discussions about state interests. Public commitments to cooperation may mask private, bilateral negotiations where the power disparity between parties is even more pronounced. Addressing these imbalances through transparent, multilateral dialogue could lead to more productive outcomes.

Corporations are leading the charge in AI development, significantly outpacing governments and the regulatory frameworks meant to guide them. This means that the interests of corporate actors need to be acknowledged and addressed, infusing an additional layer of complexity into the equation. Even if advanced nations were genuinely committed to sharing AI technology to bridge the digital divide, they might face resistance from corporations whose primary focus remains profit, not altruism. Expecting companies to act against their fundamental drive for competitive advantage, in favor of equitable access to AI, may be an unrealistic assumption.

In fact, current trends show that corporations are pulling talent inward, creating competitive environments where expertise is hoarded rather than shared. While global tech giants like Google and Microsoft have publicly committed millions of dollars to support AI initiatives in the Global South, the underlying motivation is still to stay ahead of the competition.

Given the incentives of self-interest, it might seem counterintuitive for corporations and advanced states to engage with less technologically developed partners in AI to provide them with support in integration, system adoption, and capacity building. Yet, the global stage has witnessed a surge of such engagements. As explored in “Precipice of Artificial Hegemony,” getting a “foot in the door” by introducing AI systems early provides a significant competitive edge.

A comparison of recent US and Chinese initiatives sheds light on differing approaches. The US Partnership for Global Inclusivity on AI, led by Secretary of State Blinken, outlined specific priorities and focus areas, presenting itself as a guide for others in the use of AI. In contrast, China, co-hosting a High-Level Meeting on Capacity-Building with Zambia, launched its AI Capacity Building Action Plan, focusing on a more collective and inclusive approach to AI cooperation.

The situation calls for more candid dialogue, with due recognition of the fact that both public and private actors will ultimately prioritize their self-interest. Such candor is essential if governments are to reach tangible outcomes: a clearer understanding of what each party stands to gain from AI cooperation paves the way for more productive negotiations, in which the interests of all stakeholders are addressed more transparently.

Strategic Cooperation in a Competitive AI Landscape

AI for Good Summit, Geneva 2024

As the competition between actors in the Global North accelerates in the rush to dominate AI development, the Global South finds itself at a stark disadvantage. Already trailing in ICT and other digital technologies, and facing a present and growing gap in AI, these countries feel an urgent need to engage effectively and rapidly with development partners to avoid falling further behind.

Extended discussions with intangible or vague outcomes are not in the interest of developing countries. In this race against time, every delay widens the AI divide; while outcome documents emphasizing the need for further cooperation may help move the needle on international collaboration, they may not be enough to secure the support needed for faster development.

Approaching AI engagement as a negotiation, rather than through a purely collaborative lens, may better serve these nations’ long-term goals. Acknowledging that state interests are an important driver of behavior allows effective, tangible cooperation on AI to leverage those interests and accelerate engagement. In this scenario, countries of the Global South must accept that there will be an exchange of interests, whether or not it is openly acknowledged. With negotiation, this exchange can be more transparent, providing greater clarity on the trade-offs involved.

By bringing the interests of all parties to the table, rather than framing international engagement purely as a collaborative effort for the good of all, and by leveraging the resulting transparency, governments would be better placed to compare potential partners and what each has to offer.

Documents like the African Union Strategy on AI and the AI Playbook for Small States provide starting points for engagement, but the Global South must remain wary of the dynamics at play. The likelihood that corporations or governments from the Global North will voluntarily surrender their competitive advantages is slim, and the sooner the Global South recognizes this reality, the better positioned it will be to negotiate better terms.

If developing nations fail to address these dynamics, discussions will become less transparent and the Global South may find itself perpetually reliant on external support to keep up with the technology, stuck once again in a cycle of dependency that is difficult to escape. A key objective for these governments is therefore to avoid becoming perpetual customers of AI technology while their own development lags. Similarly, they must guard against the "brain drain" effect that has plagued other sectors, as shared capacity-building initiatives often come with limitations that prevent full technological independence.

By openly acknowledging and addressing these interests, governments will be better able to engage in fruitful, realistic debates rather than aspiring to hard-to-attain modes of engagement that rely on the goodwill of parties surrendering their advantages for no tangible returns.

Enhancing Bureaucratic Agility for the AI Era

Since the first UN General Assembly resolution on AI in March 2024, the number of international AI engagements has surged, culminating in parallel US and Chinese initiatives and the adoption of the Pact for the Future and the Global Digital Compact (GDC) at the Summit of the Future. Yet, despite this flurry of activity, the pace of these bureaucratic processes is struggling to keep up with the rapid advancement of AI technology.

As AI systems become more integrated into government operations, defense systems, and autonomous decision-making processes, the stakes are higher than ever. As Geoffrey Hinton, the godfather of AI, has noted, AI’s full potential remains beyond our current understanding, and yet we are racing to adopt it and integrate it into systems that shape global dynamics and engagement. At the same time, the regulatory environment and the security architectures in place seem ill-equipped to manage the speed and complexity of AI’s evolution.

At this stage, with the welcome recommendations of the Advisory Body’s report and the priorities that have taken shape over the last six months of international engagement, the process on the international stage now needs to be infused with greater urgency and agility. Taking the proposed solutions as a starting point, some out-of-the-box thinking aimed at rapid implementation could help accelerate global engagement on AI.

Numerous untapped mechanisms could be implemented, such as a crowdsourced, decentralized task exchange platform where specific challenges are posted and others bid to solve them. By bringing together governments, corporations, and academic institutions, this approach could reduce duplication of effort, enhance direct engagement, surface common trends in the challenges faced, and foster an organic exchange of expertise across the world.
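Purely as an illustration of the concept rather than a proposed design, a minimal sketch of such a task exchange might look like the following Python data model. All names, fields, participants, and the lowest-cost selection rule here are hypothetical assumptions, not part of any existing or planned system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Challenge:
    """A specific problem posted by a government, corporation, or institution."""
    poster: str
    description: str
    bids: list["Bid"] = field(default_factory=list)

@dataclass
class Bid:
    """An offer by another participant to solve a posted challenge."""
    bidder: str
    proposal: str
    cost_estimate: float

class TaskExchange:
    """A minimal registry that matches posted challenges with bidders."""

    def __init__(self) -> None:
        self.challenges: list[Challenge] = []

    def post(self, poster: str, description: str) -> Challenge:
        # Register a new challenge on the exchange.
        challenge = Challenge(poster, description)
        self.challenges.append(challenge)
        return challenge

    def bid(self, challenge: Challenge, bidder: str, proposal: str, cost_estimate: float) -> None:
        # Record an offer against an existing challenge.
        challenge.bids.append(Bid(bidder, proposal, cost_estimate))

    def select(self, challenge: Challenge) -> Optional[Bid]:
        # Illustrative selection rule only: pick the lowest-cost bid.
        return min(challenge.bids, key=lambda b: b.cost_estimate, default=None)

# Hypothetical usage: a ministry posts a capacity-building challenge; a university lab bids.
exchange = TaskExchange()
c = exchange.post("Ministry of ICT (hypothetical)", "Localize an open AI model for a low-resource language")
exchange.bid(c, "Regional university lab", "Fine-tune and evaluate with local datasets", 50_000.0)
print(exchange.select(c))
```

In practice, selection criteria would be far richer than cost alone, but even a simple shared registry of this kind could make duplicated efforts and recurring challenge patterns visible across participants.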

On the other side of the coin, and given the potentially disruptive nature of AI, it is crucial to weigh worst-case scenarios against the positive outlooks and prospects for collaboration. To this end, a dynamic mechanism leveraging existing structures and platforms, within the ITU for example, could serve as a predictive early-warning system that flags evolving risks as the technology grows. Such a mechanism would ideally draw not only on technical expertise but on minds from a range of fields, in order to identify the broad scope of threats that could arise from misuse of the technology.

The Path Forward

The rapid pace of AI development and its integration into global systems presents both an unprecedented opportunity and a daunting challenge. Governments and intergovernmental organizations must navigate the fine balance between collaboration and competition, recognizing that while shared technological advancement is essential, state and corporate interests will remain primary drivers of behavior. To bridge the growing AI divide and foster equitable access, a new paradigm of engagement is required—one that embraces transparency, recognizes self-interest, and leverages the expertise of diverse stakeholders.

The success of AI governance will depend on the willingness of states and corporations to engage in candid dialogue, confront the realities of competition, and adopt agile, innovative solutions to manage the complexities of AI development. Initiatives like the Global Digital Compact and the recommendations of the UN’s High-Level Advisory Body on AI offer promising starting points, but the international community must act swiftly to implement mechanisms that can keep pace with the technology itself.

As we stand at the precipice of the next era in AI-driven intergovernmental relations, the question looms large: Will cooperation prevail to ensure AI serves humanity, or will competition and self-interest dictate its course? The path forward lies not in lofty ideals, but in pragmatic, actionable solutions that balance collaboration with the realities of a competitive global landscape.
