An AI Dilemma: Balancing Innovation, Cooperation and Regulation

Mohammed Elsoukkary

How can global institutions keep pace with the rapid evolution of AI while ensuring safety and effective international cooperation? The speed of AI's evolution leaves global governance struggling to keep up, posing challenges that could reshape international dynamics.

Over the past couple of weeks, I have focused much of my research on AI and its increasingly prominent role in global relations. We looked at how, with the adoption of the UN General Assembly Resolution on AI in March 2024, the topic became a fixture in multilateral discussions in global forums. It featured prominently in the Summit of the Future, held on the margins of the UN High-Level Week, and was the topic of discussion at the High-Level Meeting on International Cooperation on Capacity-Building of Artificial Intelligence.

We looked at how AI is being approached by governments and intergovernmental organizations, how its misuse could lead to deteriorating ecosystems of engagement and how a competitive rather than collaborative approach could shape the international AI landscape.

I felt, however, that I had reached a point where I needed to dive deeper into the realities of the field. I wanted to hear the perspectives of those pursuing the development of the technology, and to learn how those outlooks could inform discussions on the topic and provide actionable steps toward developing the frameworks necessary for the safe and reliable adoption of AI by governments and intergovernmental organizations.

To do so, I reached out to Mohab El Hilaly, Partner and GenAI Quebec Leader at IBM, to help me understand some of the more intricate aspects of the topics we have been exploring over the past two weeks. He offered crucial insights, nuance, and perspective, accrued from his extensive experience in the field.

Cooperation Frameworks

If you recall, we previously looked at the scramble to develop regulatory frameworks to govern the use of AI and provide a basis for international cooperation, efforts that manifested in the adoption of two resolutions and featured as key items of discussion at the Summit of the Future. In those pieces, we also raised the misalignment in pacing between the rate of AI development and the development of the related regulatory and cooperation frameworks.

Through the enriching discussion I had with Mohab this morning, I was better able to grasp the magnitude of this divergence. At one end of the equation, you have intergovernmental organizations discussing annual surveying, reporting, and conferences, and quarterly research output; at the other, you have Large Language Model (LLM) development being measured on a weekly scale.

When discussions and reports on AI occur annually, biannually, or even quarterly, gaps emerge, and these delays risk making recommendations outdated. True, it is unreasonable to expect senior officials and decision makers to meet every other week just to remain informed on AI model development. However, it is equally unreasonable to expect a body that meets once a year to make informed decisions on an issue that sees anywhere between 17 and 26 iterations in the same period, especially without supplementary mechanisms paced to the speed of its development.

The dedicated mechanisms recommended by the AI Advisory Body in its report ‘Governing AI for Humanity’, and other similar mechanisms recommended elsewhere, should be calibrated to the pace of development of the technology and provide a continuous stream of updates that inform decision-making bodies. The necessity of enhancing organizational agility in approaching AI stems not only from the need to remain abreast of technological advances, but also from the need for early warning mechanisms that flag significant developments immediately and recommend responses.

With a technology whose computing power multiplies in scope and magnitude on a regular basis, waiting on rigid bureaucratic processes risks producing responses that are irrelevant by the time they arrive. The rapid pace of AI evolution requires agile mechanisms that keep decision makers informed in near real time, rather than reliance solely on infrequent reports and meetings.

Governance and Regulation

In the same vein, the development of regulatory and governance frameworks continues to lag behind the technology. While some countries and blocs have already issued regulatory documents, such as the EU Artificial Intelligence Act or the US Executive Order on Safe, Secure, and Trustworthy AI, others, like the UK, have held back on legislation to avoid limiting industry growth and innovation.

This lag, and the inconsistency between regulatory and governance approaches, itself an outcome of the slow pace of engagement within international multilateral frameworks, could result in a patchwork of legislation across jurisdictions that would take far more effort and negotiation to align later than a concerted effort to negotiate a common framework at this earlier stage.

There is also a risk that this disparate patchwork approach could foster the establishment of AI ‘havens’ with comparatively lax regulation, which could provide a breeding ground for the development of unregulated AI that then impacts the global AI ecosystem.

To limit the potential for an unregulated or poorly regulated international AI ecosystem, and to move beyond the voluntary commitments of leading AI development companies to manage the risks of AI, there is a pressing need to develop rapid, agile, adaptable, and responsive mechanisms through multilateral forums to provide a foundation for international cooperation and governance frameworks on AI.

Building on the role of the mechanisms suggested earlier, information from them could flow to regulatory and legislative bodies in parallel with decision-making authorities. This affords the mechanisms a dual purpose, reduces duplication of effort, and aligns the information available to decision makers and legislators, providing consistency of focus between them.

Adoption and Integration

On the operational side, the questions of how to integrate the technology into governmental and intergovernmental organizations, and how to make the best use of it to produce the desired results and outcomes, have featured prominently.

Ministries of foreign affairs and intergovernmental organizations have particular considerations when adopting new technology, particularly if it may have access to information and workflows within the organization. Issues like the retention of institutional memory, continuity of work, and security of information all factor into the equation.

There are also concerns about the accuracy of outputs, particularly those related to safety and security in volatile regions or to the achievement of sustainable development goals, where a high level of precision is required to inform decision making in a dynamic environment.

To this end, Mohab was invaluable in helping me understand some of the issues on this topic that I have been wrangling with. He clarified several important elements that organizations need to take into consideration when adopting the technology, including the importance of developing the skills of personnel engaged directly with the technology so that they understand both its potential and its limitations.

In developing these skills, whether through training programs within the institutions themselves or in partnership with providers, recipient organizations need to adopt a dynamic and deliberate mindset. They need to remain aware of the differences between methodologies of use and engagement with the technology, and at the same time retain oversight over the process to ensure accuracy and relevance of output.

Reskilling versus Deskilling

AI's role in diplomacy presents a risk: the deskilling of diplomats. Overreliance on the technology could erode essential communication and negotiation skills. It therefore falls to the organizations that adopt the technology to ensure that their people continue to interact and develop their skills, using AI to supplement this process rather than replace it.

In this respect, I learned through our discussion that wielding AI effectively comes through a combination of training and experimentation with prompt tuning. Relying on one without the other will not yield optimal output.

Training can provide a good foundation for the use of the technology at the early stages of its adoption, but without sufficient engagement and experimentation with the software as it pertains to the specific requirements of the organization, the results will remain subpar. On the other hand, experimentation without sufficient training extends the timeline for streamlining the use of the technology beyond what it needs to be, and can result in ineffective use of the technology, reducing the return on investment.

From this, we can deduce that a dual-track approach would optimize safe and efficient integration of the technology within organizational systems. The first track is training, particularly on the scope of use, access to information and institutional data (with emphasis on protection of classified documents and information through compartmentalization), and proper contextualization. The second track is the freedom to experiment with and tune prompts for better results, analyzing the accuracy of outcomes and the effectiveness of AI in supporting the targets of the organization.

The risk of personnel being replaced may not be as stark in diplomacy as in other sectors, but the risk of deskilling remains. The aggregate effect of overreliance on AI as a substitute for direct interactions in the field could adversely impact relations between states over the medium to long term.

Diplomats and other professionals in international relations tend to accrue their expertise iteratively through interactions, developing the nuanced interpretive skills that allow them to communicate effectively across borders and cultures. Losing those skills and reducing interactions to machine engagements dictated by AI-generated recommendations could result in an even more adversarial global arena rife with misunderstanding and miscommunication.

It falls upon ministries of foreign affairs and intergovernmental organizations, therefore, to be very calculated in their approaches to integrating the technology and to strike a balance that reduces the risks of overreliance while ensuring that the return on investment in the technology is not lost to underuse, a risk flagged by the AI Advisory Body in its final report mentioned above.

Accountability


From the issue of skills and the use of AI within organizations, the next question was that of accountability. Given the critical nature of many of the decisions that ministries of foreign affairs and intergovernmental organizations make, and their impact on the international stage, accountability is a central concern in the adoption of AI; without proper human oversight, an AI recommendation could inadvertently escalate a diplomatic crisis.

When it comes to ensuring accountability for decisions made through the use of AI, the question of assigning responsibility becomes crucial, particularly for decisions reached through unexplainable processes, where how the outcome or recommendation was arrived at cannot be traced, and for decisions that impact human rights, development targets, or the safety and security of people.

In this respect, a mitigation measure that could ensure accountability while preserving relevance of output is for organizations to enforce the presence of a human in the loop for AI outputs. In addition to relevance, this provides supervision over the process, review of outcomes, and a maintained chain of accountability for decisions and recommendations.

Even with the advent of agentic AI workflows, the use of teams of AI agents to perform a series of tasks, which Mohab clarified have been seen to reduce AI ‘hallucinations’ by up to 80%, it is important for organizations of this nature to keep a human in the loop, to avoid the erosion of accountability that comes with assigning overarching responsibilities to AI (the issue of overreliance).

This concept of maintaining a human in the loop of AI processing, supervising the resultant outputs, supports the need for reskilling and feeds back into the necessity of augmenting the skills of the people within these organizations rather than replacing them.

Information Security


When it comes to organizations operating in the diplomatic and intergovernmental spheres, the issues of confidentiality and information classification are perennial areas of concern. I raised the question of whether it would be more practical for organizations, governmental or intergovernmental, to commission the creation of their own AI systems, exclusive to them, to ensure the security of their more sensitive data and workflows.

He very gently explained to me, a near-luddite compared to his expertise with the technology, that building proprietary AI models is resource-intensive, sometimes impractical, and must be assessed on a case-by-case basis. By the time a custom model is trained, a newer version is already in development. As an alternative, organizations can collaborate with technology developers to stay up to date and ensure security through robust protocols.

By compartmentalizing access to data flows and information through rigidly enforced organizational protocols, information security can be protected while still allowing AI to augment the organization's ability to achieve its targets. With consistent training and engagement with the technology, fine-tuning its use through experimentation, and separating sensitive information from information accessible to the technology, a balance can be struck between security and effective use of AI.

Takeaways

There is a lot to digest, and with what I have learned today I realize that the process of integrating this rapidly evolving technology into the global bureausphere is layered with complexities that require harmonizing approaches across a number of different contexts to ensure that the adoption of AI works in our collective favor. 

In that direction, there are some takeaways that came about from this conversation, starting with the reality of the gap between the pace of development of AI and the relatively slower pace of development of international cooperation and legislative frameworks.

Organizations, both governmental and intergovernmental, need to enhance their agility and wield the mechanisms at their disposal, as well as those recommended by the AI Advisory Body, more effectively. In their current formats, these mechanisms will struggle to keep their decisions, outcomes, and recommendations on AI relevant, given the disparity in pacing.

Similarly, intergovernmental approaches to governance and regulation need a more agile mode of thinking to ensure continued relevance and to avoid the rise of a legislative patchwork with gross inconsistencies across the international arena, one that could feature gaps or havens that adversely impact the entire ecosystem through unregulated development of the technology.

On the operational side, governmental and intergovernmental organizations need to develop clarity of purpose when adopting AI, and establish mechanisms that balance confidentiality and security of information with effective use of the technology to achieve a reasonable return on their investments.

They must also proactively engage in training, experimentation, and skill development for their personnel to ensure safe use and useful outputs. In that respect, they must also ensure that chains of accountability remain solid and avoid the temptation of overreliance on AI, even as its processing power and capabilities increase. Overreliance can lead to the progressive deskilling of personnel over time and the loss of key cross-cultural and negotiation skills that are integral to diplomacy and international relations.

The insights gained from this conversation underscore a critical truth: effective AI integration requires a blend of agility, collaboration, and human oversight. As AI reshapes global dynamics, the challenge lies in harnessing its potential while maintaining the human elements at the core of global diplomacy.
