AI summit does not achieve consensus on risk management
The Paris AI Summit aimed to open a new phase in the governance of advanced AI systems, but concerns remain that rapid development of AI could enable corruption or create serious preventable harm.
By curating the language and imagery that describe our reality, AI systems can skew the overall body of evidence, degrade understanding, and even cause dangerous decisions to be made. Depletion of critical analysis capacity creates the additional risk that wrongdoing might be harder to uncover. Autonomous armed drones are an obvious concern, but so are chatbots that provide medical advice (even if they come with fine print saying they are not offering advice).
The two-day Artificial Intelligence Action Summit examined some of the major questions about the future of work, the future of human rights, the future of governance and yes, the future of business. The Summit aimed to identify major risks, prevention and containment strategies, and pathways to the establishment of “public interest artificial intelligence”.
Summit preparatory materials warn:
The current trajectory of artificial intelligence development will result in three major issues:
Increased inequality between those who control and those who use artificial intelligence;
Progress made in AI concentrated in a small circle of private actors, jeopardizing not only the diversity of actors involved but also the sovereignty of countries that have no leverage over this critical technology;
Missed opportunities to resolve key social problems (such as fighting cancer) because of the fragmentation of public interest artificial intelligence initiatives and scarce data.
In November 2023, the Bletchley Park Summit resulted in the Bletchley Declaration, in which nations committed to engage in cooperative and preventative measures to inform national policy, research, and technology deployment, to address “frontier AI risk” and “to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives.”

A key element of that agreement was the aim of:
“identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.”
The Paris AI Summit follows 15 months of further rapid spread of AI services for businesses and consumers. Some of the pressing questions about AI safety have to do with informational integrity, data sovereignty, and the right of people to earn a living in relation to skilled work. For instance:
How did leading AI platforms develop their core technologies? What methods were used, and what legal standards might be in conflict with those methods?
Are AI services designed to mimic highly trained, skilled professionals like writers, reporters, doctors, lawyers, and engineers?
What are the ethical implications of such mimicry? Are end users potentially subjected to reputational, financial, or even physical risk?
What are the economic implications for affected professionals?
What kinds of dangers might this expose people to, especially people who have no knowledge that AI intervened in their lives?
Beyond this, there are grave concerns about the use of AI systems to develop new chemical and biological weapons. The Bletchley Declaration also recognized the risk of advanced AI systems making decisions that are not aligned with human intent or with human standards, including ethical standards, human rights, rule of law, and security protocols.
Among the major risks discussed in Paris was the lack of a comprehensive, global governance architecture for development of new AI systems.
Even so, the Bletchley and Paris summits both cast the need for identifying and preventing major risks as one facet of an overall project to build advanced AI systems and to deploy them widely. Critical reading of outcomes and discussion materials from both summits raises questions about whether sufficient attention has been paid to material prevention and containment of frontier risks, in light of calls for widespread deployment of AI systems in both private-sector and government systems.
There are potentially existential questions for the future of democracy and human freedom, as AI systems spread into more areas of everyday human interaction. For instance: How can we assert our right to enact strict safety standards affecting all of the professions that affect our health and well-being, safety, security, and opportunity, while trillions of dollars pour into ventures aimed at putting imperfect AI systems into use in those professions?
One simple answer, which the Paris Summit did not address, would be to hold AI service providers legally liable for any harm resulting from the use of erroneous information provided by their systems.
The Summit produced a Charter on Artificial Intelligence in the Public Interest, which recognized that “artificial intelligence should not be developed and deployed in areas where it is incompatible with international human rights law.” The Charter also commits “to prevent and mitigate individual and collective harms, risks, threats and violations caused by the use and abuse of AI.”
The Charter also puts forward three principles that should shape development of AI systems in line with the recognition of rights, safety, and the general wellbeing of humankind. Those principles are openness, accountability, and participation—aiming to ensure inputs and system design consider risks and vulnerabilities across a broad spectrum of human experiences, and to guard against abusive practices.
U.S. Vice President J.D. Vance—a close personal ally of Peter Thiel (co-founder and chairman of the major U.S. defense contractor Palantir) and other Silicon Valley investors with major interests in AI—declined to agree to the Paris Summit’s joint declaration, warning against “excessive regulation”.
This comes even as American public agencies appear to be undergoing a kind of corporate takeover led by Elon Musk, a mega-billionaire with major interests in AI. Observers cite the risk of corrupt interests undermining AI safety, and even of AI systems being used to further the corruption of public agencies.
Transparency International defines corrupt AI as “Abuse of AI systems by (entrusted) power holders for their private gain.” One concern is the targeted creation of “deep fakes” to skew public opinion about political rivals. A similar but more insidious risk is that AI systems might be allowed to develop extensive reach for flooding public discourse with both minor and major distortions of fact.
The UK also declined to sign the Leaders Declaration, saying they “agreed with much of the leaders’ declaration and continue to work closely with our international partners,” but that “the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”
China and India did sign, suggesting there is opportunity for an agreed global framework for preventing serious threats from AI. That hope, however, is called into question by the fact that China, Europe, and the United States are all racing to achieve global dominance in the field. The current practice of allowing these systems to roll out as commercial services, including into high-stakes professional decision-making, remains a safety risk with no agreed response.
UPDATE—Feb 22, 2025
Trump/Musk AI actions raise concerns about rule of law
Legal analysts are reporting that actions taken by Donald Trump during the first weeks of his new administration have removed safeguards aimed at preventing harm and abuse, opting instead to promote “U.S. dominance”. The Charter agreed in Paris commits to such safeguards.
The effort by Elon Musk’s “DOGE” operation to replace human civil servants has also been flagged as a potential threat to Americans’ right to accountable government and to pursue redress for harms caused.
On the risks AI poses to personal freedom of choice & responsible decision-making
Geoversiv has long reported on the dynamics (and risks) of hyper-convergence—the activated interconnection of data systems with elements of our physical experience. In 2017, we examined how engagement with artificially intelligent machine-based services could threaten human free will and freedom of choice, noting:
“Data integration must empower people to make sovereign personal decisions, informed by reliable evidence. Free will is irrevocable. We are always engaging it, yet countless editorial decision points determine the list of options from which we will choose.”