AI and Diplomacy: Addressing the Illusion of Expertise
Less than two months ago, I started exploring the impact that AI is having on international relations and diplomacy. I began with the big-picture elements of AI's evolving role and how its rapid introduction into the diplomatic space might affect the field.
I also looked at how the risk of overuse could create a cycle of over-reliance, leading over the medium to long term to a prevalence of AI-generated output unmitigated by human nuance and, ultimately, to the deskilling of diplomats and other professionals.
I then explored the competition underway between the leading AI titans: a veritable race into the Global South and other markets whose countries lack the capacity to develop their own AI systems. I looked at how this competition could impose a technological glass ceiling on those countries and their governments, becoming a hindrance rather than a platform to launch from, and at how the competitive outlook among the field's leaders may end up establishing the primacy of AI over its human counterparts.
From there, I dove into the issue of regulation: the difficulties that global institutions will face in creating a regulatory framework for AI, and how the methods and pace of development and adoption will dictate the relevance of any frameworks developed in multilateral contexts.
I then used the White House AI Memo as an example of what governmental approaches to AI may look like: shaped in large part by national security concerns and by mutual distrust between the key players in the AI space (though perhaps I should have waited, since Trump seems likely to repeal or relax the memo's terms).
What I had not expected is what I have seen recently at the intersection of AI and diplomacy: a flood of courses and trainings offered to diplomats and other professionals, promising to prepare them to use AI in their day-to-day work.
Offered by seemingly everyone and their aunt, these courses barely scratch the surface of what AI can do, while claiming to prepare people to wield these spectacular tools in an immensely complicated field.
This trend signals an evolving landscape of supply and demand in the realm of AI and diplomacy. It is also a precursor to a problem I had not yet tackled, one that needs to be nipped in the bud before it grows and begins having an undesired impact on the entire field.
Let's dive in.
The Illusion of Competence: When AI Expertise Falls Short in Diplomacy
Also known as the Dunning-Kruger effect, the illusion of competence is a cognitive bias that occurs when individuals with a limited understanding of a subject significantly overestimate their expertise. This phenomenon encapsulates the saying, “a little knowledge is a dangerous thing,” and in the context of AI and diplomacy, its potential cumulative repercussions are particularly troublesome. Diplomacy is a nuanced field where decisions can carry long-lasting implications, and the introduction of AI complicates an already complex terrain.
In bureaucracies such as ministries of foreign affairs and intergovernmental organizations, certifications and training credentials are often regarded as indicators of expertise. This creates a fertile ground for individuals to leverage AI-related certifications—regardless of their depth or quality—as shortcuts to career advancement.
These individuals, armed with superficial knowledge but brimming with confidence, may rise to influential roles within organizations increasingly reliant on AI to inform their operations. Whether these roles involve AI-specific tasks or the integration of AI into broader functions, the risks of overconfidence coupled with a lack of genuine expertise are considerable when scaled to organizational levels.
The institutional mechanisms that reward certifications (promotions, prestigious postings, greater influence) further exacerbate the issue. Seeing the success of their peers, others are incentivized to pursue similar certifications, creating a domino effect: as organizational rewards for these credentials grow, demand for them can be expected to rise across the board.
This rush for credentials can create a feedback loop: as credentials come to signal expertise, organizations empower those who obtain them, and the resulting confirmation bias, exacerbated by the Dunning-Kruger effect, reinforces the very signal that set the loop in motion. This sets the stage for ill-prepared individuals to ascend to key positions, establishing precedents that normalize subpar expertise.
The consequences of erroneous decision-making in this environment are profound. Overconfident yet underqualified individuals are unlikely to recognize the limitations of their understanding or the flaws in AI outputs. As a result, organizations could find themselves progressively basing decisions on inaccurate outputs, gradually entrenching themselves in disadvantageous positions.
This could lead to systemic issues that go unnoticed until significant damage has been done. Should these superficial certifications become the benchmark for qualification, organizations risk fostering a broader illusory competence. Leaning on certifications offered by generic centers may create the impression that the organization (or its personnel) has invested heavily in human resource development. This not only results in problematic decisions and misjudgments, but also impedes progress toward actual capacity building by suggesting that the capacity and skills are already in place.
Entire teams may fail to identify critical problems, such as flawed AI models, biased training data, or outputs misaligned with the organization's goals. Over time, this can erode institutional coherence and undermine operational effectiveness, leaving encumbered processes whose inefficiencies go unrecognized.
Tailored AI Training: Meeting Specific Needs
There is an undeniable need for effective AI training in diplomacy and international relations. However, the quick-fix courses that currently permeate the market fail to address the depth and specificity required to meet this need. Diplomacy is a field defined by its complexity, and the integration of AI into its practices demands tailored programs that align with the operational realities of each organization.
Training programs should not be limited to teaching simple prompt engineering or providing surface-level overviews of AI systems. Instead, organizations operating within the sphere of international relations must undertake a deliberate assessment of their specific contextual requirements. By identifying these needs, institutions can ensure that training and certifications are aligned with their strategic objectives and operational workflows.
A comprehensive, organization-wide evaluation is a critical first step. Such an evaluation would determine how AI can best augment and support the institution’s work, identifying the processes where AI integration would be most effective. This assessment would also clarify the specific skills required of individuals who will oversee these responsibilities, establishing a framework for the development of relevant training programs.
Generic, one-size-fits-all training programs will not meet these needs. Without rigorous, agency-specific parameters defining what constitutes acceptable and accredited training, the proliferation of superficial courses risks undermining institutional objectives. This trend primarily benefits training centers and course providers, who capitalize on the growing demand by offering quick-fix solutions. While these programs may provide users with basic tools, they rarely equip participants with the depth of understanding necessary to be considered reliable experts within their organizations.
The requirements of ministries of foreign affairs and intergovernmental organizations are particularly intricate and interconnected. Ad hoc adoption of AI models by individuals who pursue external training without institutional oversight cannot be expected to meet these demands. Moreover, information security is a paramount concern in international relations, making it essential to regulate the use of AI in professional contexts. This requires a clear distinction between individual use of AI and organizationally sanctioned applications. AI systems integrated into institutional workflows must be tailored to the organization’s needs, and external training that does not address these specific models should not be recognized as relevant expertise.
Establishing AI Standards & Ensuring Relevance
To address the challenges posed by illusory competence, establishing enforceable standards for AI training within diplomatic organizations is not just desirable but essential. Such standards must be built on collaboration between the organizations themselves and vetted institutions that can offer genuinely tailored solutions, ensuring that training programs are designed to meet an institution's specific contextual requirements.
Practical application must be at the heart of these training programs. Theoretical knowledge alone is insufficient in a field as dynamic as diplomacy; a practitioner handling portfolios of national, regional, or even global scope cannot lean on a general grasp of prompt engineering acquired through a one-week course from a nameless training center.
Training should include robust assessments of participants' ability to use AI in real-world scenarios, preparing them to navigate the complexities of diplomatic engagements with confidence and competence. Furthermore, these programs must teach participants how to identify and resolve issues related to AI programming and outputs. Ensuring that AI-generated outcomes align with an institution’s overarching processes, information security, and goals is critical to the successful integration of these technologies.
Interdisciplinary collaboration should also be emphasized. Rather than encouraging non-experts to take a few courses, AI experts should be integrated into the machinery of organizations operating in this sphere; bridging the gap between technical proficiency and diplomatic expertise cannot be left to one to the exclusion of the other. This collaborative approach can help organizations avoid over-reliance on certifications as markers of competence, focusing instead on demonstrable skills, real technical expertise, and practical application. Such measures will enable organizations to prioritize depth and relevance over superficial qualifications, ensuring that AI is a tool for empowerment rather than a source of disruption.
Token Credentials: Status Over Substance
The rising demand for AI certifications has created fertile ground for the proliferation of superficial training programs, resulting in what can be described as token credentials. Certifications like the ones we are discussing do not necessarily denote any real expertise; nevertheless, even a brief browse through social media gives the impression that they define expertise in the field.
They are touted left and right across professional social media (think LinkedIn), increasingly becoming status symbols that drive individuals to pursue them as tokens of professional advancement rather than tools for meaningful development.
This phenomenon, driven in part by competition and the fear of being left behind or becoming irrelevant, encourages a superficial approach to AI adoption. As individuals increasingly seek ways to address the gaps in expertise left open by their organizations, this trend is unlikely to slow down unless organizations take deliberate steps to institute their own sets of standards.
Relying on the veneer of competence that short courses and trainings provide may lead organizations to inaccurate assessments of their own readiness and ability to integrate and wield the technology effectively. Over time, such organizations may become both more reliant on AI and more vulnerable to its misuse, lacking the resilience and depth needed to navigate the complexities of AI-driven systems.
Ultimately, this trend may undermine the very objectives these certifications purport to achieve, creating a workforce ill-prepared to address the challenges and opportunities AI presents, all while unaware of this shortcoming.
Resisting the Lure of Quick Fixes in AI Integration
Despite the clear and mounting risks associated with superficial AI training and the illusion of competence, bureaucracies are likely to persist in these mistakes. The allure of quick-fix solutions, token credentials, and surface-level understanding fits neatly into the structural tendencies of large organizations: a preference for visible, immediate gains over long-term investments in substantive expertise.
Appearing to take action, or encouraging it on the part of individual staff members, serves to deflect accusations of being unprepared or left behind. It helps maintain the illusion of a solution-oriented approach to a dynamic and unpredictable challenge.
The systemic reliance on certifications as a marker of qualification creates a feedback loop that reinforces itself. As individuals use these credentials to climb organizational ladders, their perceived success incentivizes others to follow suit, further entrenching the illusion of competence. Institutions then normalize subpar standards, mistaking the proliferation of certificates for genuine capacity building. This not only compromises decision-making but also erodes organizational coherence, as flawed AI outputs and misaligned strategies go unrecognized and unaddressed.
Efforts to set rigorous standards and establish tailored training programs that align with institutional needs are essential, but bureaucratic inertia often resists such change. Token credentials will continue to dominate unless deliberate action is taken to prioritize depth over appearance. Without these measures, ministries of foreign affairs and intergovernmental organizations may find themselves unwittingly perpetuating the very vulnerabilities they aim to address. By adopting internally coherent parameters and stringent standards, organizations can reduce their vulnerability, filtering out token credentials and identifying relevant expertise preemptively.
Bureaucracies often prioritize optics and immediate results over sustainable growth, which is less useful as a tool for self-promotion; in dynamic environments like ministries of foreign affairs and intergovernmental organizations, this tendency may be exacerbated. Where office rivalries are prevalent, mutual accusations of delayed action can be expected, prompting decisions geared toward the appearance of quick results, even when they are inherently ineffective.
This tendency, exacerbated by the competitive dynamics of professional advancement, means that the warning signs—however clear they may be—are unlikely to deter the current trajectory. The risk is not just the adoption of flawed practices but the normalization of these practices to the extent that they become entrenched in the institutional culture.
In this evolving field of AI and diplomacy, the stakes are too high for complacency. While the path forward is fraught with challenges, the responsibility to resist the temptation of shallow fixes rests with the stewards of these institutions. Whether they will rise to the occasion or fall prey to the easy, immediate gains of token expertise remains to be seen.