No, the human-robot singularity isn’t here. But we must take action to govern AI | Samuel Woolley

On a recent trip to the San Francisco Bay Area, I was shocked by the billboards that lined the freeway outside of the airport. “The singularity is here,” proclaimed one. “Humanity had a good run,” said another. It seemed like every other sign along the road was plastered with outrageous claims from tech firms about artificial intelligence. The ads, of course, were rife with hype and ragebait. But the claims they contain aren’t occurring in a vacuum. The OpenAI CEO, Sam Altman, recently said: “We basically have built AGI, or very close to it,” before confusingly qualifying his statement as “spiritual”. Elon Musk has gone even further, claiming: “We have entered the singularity.”

Enter Moltbook, the social media site built for AI agents. A place where bots can talk to other bots, in other words. A spate of doom-laden news articles and op-eds followed its launch. The authors fretted about the fact that the bots were talking about religion, claiming to have secretly spent their human builders’ money, and even plotting the overthrow of humanity. Many pieces contained suggestions eerily like those on the billboards in San Francisco: that machines are now not only as smart as humans (a milestone known as artificial general intelligence) but that they are moving beyond us (a sci-fi concept known as the singularity).

Based upon my years of research on bots, AI and computational propaganda, I can tell you two things with near certainty. First, Moltbook is nothing new. Humans have built bots that can talk to one another – and to humans – for decades. They’ve been designed to make outlandish, even frightening, claims throughout this time. Second, the singularity is not here. Nor is AGI. According to most researchers, neither is remotely close. AI’s advancement is limited by a number of very tangible factors: mathematics, data access and business costs among them. Claims that AGI or the singularity have arrived are not grounded in empirical research or science.

But as tech companies breathlessly promote their AI capabilities, another thing is also clear: big tech is now far from being the countervailing force it was during the first Trump administration. The overblown claims emanating from Silicon Valley about AI have become intertwined with the nationalism of the US government as the two work together in a bid to “win” the AI race. Meanwhile, ICE is paying Palantir $30m to provide AI-enabled software that may be used for government surveillance. Musk and other tech executives continue to champion far-right causes. Google and Apple also removed apps that people were using to track ICE from their digital storefronts after political pressure.

Even if we don’t yet have to worry about the singularity, we do need to fight back against this marriage of convenience caused by big tech’s quest for higher valuations and Washington’s desire for control. When tech and politicians are in lockstep, constituents will need to use their power to decide what will happen with AI.

Many people understandably believe that socially beneficial technology regulation is not possible in the current political climate. Luckily, governmental and corporate policy are not the only ways to combat the challenges and uncertainties presented by AI. The recent protests in Minneapolis have reminded us of the power we have as a collective, even one loosely organized. Minnesotans’ display of strength has caused the Trump administration and the corporations supporting it to retreat. In the past, public pressure has caused big tech to make changes related to users’ privacy, safety and well-being.

The recent protests, and the subsequent retreat by powerful organizations, demonstrate that the powerful run things at the sufferance of the people. This is true of politicians and it’s also true of business leaders. AI is not a runaway force in the hands of those at the top, but, as two Princeton scientists put it, a “normal technology”. Its effects upon the world will be decided by people. We have the capacity to allow its impact to accelerate, but we also have the ability to control and regulate its use. As the Anthropic CEO, Dario Amodei, recently argued, AI can and should be governed. The risks AI poses to society, not least in perpetuating growing inequality and informational slop, are real but manageable challenges.

This is not to say that AI – particularly generative AI and large language models (LLMs) – is not already changing how we communicate and even how we conduct other aspects of daily life. Yet Moltbook, and the AI agents that populate it, are not a demonstration of scientific benchmarks of intelligence. A reporter who recently “infiltrated” the bots-only platform found as much, describing it as “a crude rehashing of sci-fi fantasies”. Others have noted similar mundanities about the site – that many of its posts actually seem to come from humans and, more importantly, that the bot-generated posts are simply “channeling human culture and stories”. They spout nonsense about religion and bogusly herald the age of superintelligent machines because that’s how humans often talk about robots and digital technology.

These so-called “agents” don’t have agency in the way people do, and they aren’t intelligent in the way people are. In fact, they are mostly reflections of people. Like the social bots that came before them, they are encoded with human ideas and biases because they are trained on human data and designed by human engineers. Many of them also operate via mundane automation, not actual AI (a term that continues to be vigorously disputed and debated by scientists).

People have managed changes spurred by new technologies many times before, and we can do it again. Again, Anthropic’s Amodei presents an alternative to the views of many of his peers: AI governance must be focused and informed. It does not have to be antithetical to reasonable technical progress or democratic rights. We must demand that AI be effectively governed and we must do so soon. AI is causing change and politicians are creating chaos, but the power to decide the future still lies in the hands of humans.
