By GrantXpert on behalf of ELOQUENCE Project
Artificial intelligence is reshaping every corner of modern life, yet the systems driving this transformation were built without input from the millions of people they now affect. That gap has consequences. And closing it starts with your voice.
Artificial intelligence is no longer a promise on the horizon. It is already embedded in the systems that govern access to healthcare, shape hiring decisions, determine credit scores and personalise the information we see online. For most people, these processes are invisible — and that invisibility is precisely where the problem begins.
AI systems learn from data. And data, however vast, is never neutral. It reflects the world that produced it: its histories, its hierarchies, its blind spots. When the communities most affected by AI are absent from the data used to train it — or from the rooms where it is designed — the resulting systems tend to replicate and sometimes amplify the very inequalities they might otherwise help to address.
"AI is only as fair and inclusive as the processes behind it. The technology itself is not the problem. The problem is who gets left out of shaping it."
The promise and the gap
The potential of AI is real and significant. In healthcare, it supports earlier and more accurate diagnoses. In education, it enables personalised learning at scale. In public services, it can reduce bureaucratic barriers and improve access. At its best, AI can help build a more efficient, responsive and equitable society.
But this outcome is not automatic. AI systems designed without diverse input risk misunderstanding or misrepresenting the needs of communities they were never properly trained to serve. A facial recognition tool that performs poorly on darker skin tones. A recruitment algorithm that favours candidates whose profiles reflect historical hiring patterns. A healthcare model trained almost entirely on data from one demographic. These are not hypothetical concerns — they are documented realities.
Key risks when AI overlooks real people:
- Bias embedded in training data can skew decisions on hiring, lending, healthcare and law enforcement
- Underrepresentation in datasets leads to systems that misinterpret or exclude entire communities
- Inaccessible design creates barriers for users with disabilities or limited digital literacy
- Opaque, automated decision-making erodes public trust and limits meaningful oversight
Who is most at risk
Certain communities are both the most underrepresented in AI development and the most exposed to its potential harms. Their perspectives are not optional additions to the conversation. They are essential to getting AI right.
These include women, people of African descent, people aged 65 and over, people with disabilities, members of the LGBTQI+ community, and non-nationals living in a country other than their own. Each of these groups interacts with AI-driven systems in distinct ways, and each has insights that current development processes routinely fail to capture.
Older adults, for instance, bring important questions about privacy, autonomy and digital exclusion. People with disabilities can identify accessibility gaps that developers overlook. Migrant communities experience the consequences of algorithmic decision-making in immigration and housing systems in ways that are rarely reflected in AI research.
The ELOQUENCE project: research with a purpose
The ELOQUENCE project is a large-scale European research initiative focused on promoting ethical, inclusive and human-centred AI. Funded under Horizon Europe, it brings together researchers, organisations and communities across the continent with a shared goal: ensuring that the development of AI reflects the full diversity of the people it serves.
GrantXpert contributes to the project by leading survey implementation and broad community engagement, working to ensure that the perspectives of underrepresented groups are not just acknowledged but actively incorporated into the research process.
At the heart of this work is a fundamental conviction: inclusion in AI cannot be achieved by talking about communities. It requires listening to them.
Your voice, your future
To support this, ELOQUENCE is conducting a short, anonymous survey open to people across Europe. It asks how individuals perceive AI, what role it plays in their daily lives, what concerns they hold, and what they expect from the systems being built right now. The responses will be used to directly inform policymakers, developers and institutions working on the next generation of AI tools.
This is not a research exercise disconnected from real outcomes. The evidence gathered through this survey will shape how AI is designed, governed and deployed across the EU. Every response strengthens the data. Every perspective that is captured is one that might otherwise be missing from decisions that affect millions of lives.
Take part · ELOQUENCE Survey
Your perspective can shape the future of AI in Europe:
https://ec.europa.eu/eusurvey/runner/ELOQUENCESurvey2025
Anonymous · Multilingual · Just a few minutes
If you found this article useful, please share it with colleagues, friends, community groups or on social media. Reaching the communities whose voices are most needed requires all of us. The more people who take part, the stronger and more representative the research becomes.
AI will continue to grow more powerful and more present in everyday life. The question is not whether it will shape our future, but whether that future will be built for everyone, or only for some. Answering that question starts here.