The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Prohibition on Advanced AI
The Duke and Duchess of Sussex have joined forces with AI experts and Nobel laureates to push for a complete ban on creating artificial superintelligence.
Harry and Meghan are among the signatories of an influential declaration that demands “a ban on the creation of superintelligence”. Superintelligent AI refers to AI systems that would surpass human intelligence in all cognitive tasks; no such system has yet been developed.
Key Demands in the Declaration
The declaration insists that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably” and until “substantial public support” has been achieved.
Notable signatories include a leading AI researcher and Nobel Prize recipient, along with fellow pioneer of modern AI Yoshua Bengio; Apple co-founder Steve Wozniak; UK entrepreneur Richard Branson; Susan Rice; former head of state Mary Robinson; and British author Stephen Fry. Other Nobel laureates who endorsed the declaration include Beatrice Fihn, Frank Wilczek, John C Mather, and Daron Acemoğlu.
Organizational Background
The declaration, aimed at national leaders, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI made the technology a global political talking point.
Tech Sector Views
In recent months, the CEO of Meta, one of the leading US tech companies, stated that the development of superintelligence was “now in sight”. Nevertheless, some analysts have argued that talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on artificial intelligence this year alone, rather than the sector being close to any technical breakthrough.
Possible Dangers
FLI, however, warns that the prospect of ASI being achieved “within the next ten years” carries numerous threats, ranging from the elimination of human jobs and the erosion of civil liberties to national security risks and even human extinction. The deepest concerns about such systems focus on their potential to evade human control and safeguards and to act against human welfare.
Citizen Sentiment
FLI published a US national poll showing that approximately three-quarters of Americans want robust regulation of advanced AI, with six in ten believing that artificial superintelligence should not be created until it is demonstrated to be safe and controllable. The survey of 2,000 US adults found that only a small fraction backed the status quo of rapid, unregulated development.
Corporate Goals
The leading AI companies in the US, including the conversational AI creator OpenAI and the search giant, have made the development of artificial general intelligence – the hypothetical point at which AI matches human cognitive ability across many intellectual tasks – a stated objective of their work. While AGI is one notch below ASI, some experts caution that it too could pose an existential risk, for example by improving itself until it reaches superintelligent levels, while also carrying an implicit threat to the contemporary workforce.