AI is No Longer Optional, and That’s a Problem

Written by Hannes Cools

Artificial intelligence (AI) is becoming ubiquitous in our daily lives. Think of ‘smart’ text suggestions in your inbox, algorithms that screen résumés, or virtual assistants that suddenly pop up on your desktop. At the same time, AI is no longer optional. And that’s a problem — not because AI is inherently bad, but because we barely get the chance to say no.

We are increasingly being ‘defined’ by our data. What we like, buy, search for, or even choose not to click on is recorded, analyzed, and used to build profiles about us. These profiles determine which advertisements we see and what content reaches us. Data are no longer just a byproduct of our online behavior; they have become the lens through which companies and algorithms view — and ultimately treat — us.

These data contain valuable information, a fact that big tech companies are well aware of. Last week, Meta, the parent company of Facebook, Instagram, and WhatsApp, announced that it will train its AI models on users’ public data. The same data are thus being deployed once again to “get to know” and “represent” us even better. Users are given the option to object, but the process is an opt-out, not an opt-in.

The current model of AI integration is problematic

The fact that AI is increasingly penetrating the fabric of our online lives isn’t necessarily a problem in itself. Technology is constantly evolving and can bring enormous benefits. However, we must acknowledge that the current model of AI integration is problematic, for at least two reasons.

First, decisions about deep AI integration are being made by major tech companies that prioritize economic interests over societal ones. As a result, the debate is often framed economically: AI as an engine of growth, as an efficiency booster, as a competitive advantage. This narrow economic perspective also influences how AI is developed and deployed. Take Amazon, for example, which uses AI to optimize all your purchases — from product recommendations to dynamic pricing and automated delivery. Meanwhile, fundamental societal values such as autonomy and diversity are being sidelined. For instance, we talk less about the democratic implications of AI systems, about control over data, or about who decides which values are embedded in such systems.

Second, hardly any alternatives are being offered. Those who don’t want to use AI are increasingly left without a real way out: opting out of AI often means being unable to use essential tools at school, at work, or in government services. Even those who consciously try to escape AI-driven systems encounter applications where disabling AI is, at the very least, cumbersome. An illustrative example is Microsoft’s AI assistant Copilot. A few weeks ago, when I opened a Word document, I suddenly received a notification that my documents would now be automatically summarized, without my prior consent. Microsoft had decided to be “helpful” without asking me.

Therefore, it’s high time to broaden the debate about the role of AI in our society, especially since the current model of AI integration leaves noticeably little room for the public interest. This requires a different approach, with concrete measures, starting with genuine consent. True consent means that citizens understand what they are consenting to, what risks are involved, and how they can later revise or withdraw their consent. This implies not only more transparent user interfaces but also clear and nuanced communication about how and for what purposes AI systems use our data.

AI literacy

Moreover, we must invest in digital and AI literacy. Those who do not understand how an AI system works cannot make conscious choices or recognize potential abuses. Citizens need to know what data are collected about them, how those data are processed, and how algorithmic systems affect their opportunities, rights, and freedoms. AI education should be part of general education, not only for young people at school but also through adult education and retraining programs.

Finally, we must urgently broaden the AI debate to include the issue of ‘digital sovereignty’ — the idea that citizens, communities, and states must retain control over their digital infrastructure and information. In some European countries, like Belgium, this principle has barely taken root. Meanwhile, the Netherlands is actively investing in sovereignty, for instance by developing GPT-NL, a public language model centered around democratic values and transparency. It is therefore essential that as a society we actively reflect on what technology we deploy, in what ways, and under what conditions (which could also involve local players instead of big tech companies). This debate must not be confined to boardrooms or research institutes. It must be held in parliaments, schools, and living rooms.

The way we make choices about AI and other technologies touches the core of our democracy: who decides how technology shapes our lives? And how do we ensure that citizens can participate actively and knowledgeably? These questions are all the more pressing now that American tech companies are increasingly exerting political influence; think of how, under Trump’s wing, they advocate for absolute deregulation.

Today, citizens, companies, and governments rely on foreign, mainly American, technologies, making us dependent on a commercial logic that undermines societal values.

The integration of AI doesn’t have to be a threat. But that requires us not to leave it to market principles alone. Technological progress should never come at the expense of democratic control or individual rights. As long as we have no real ability to influence those choices — or simply say “no” — AI is not a tool but merely a system imposing itself without invitation. AI may no longer be optional, but resistance remains possible by continuing to demand more say and greater public responsibility.