Designing AI for all: New framework puts inclusion and safety at core
A new study published in the journal AI warns that inclusive, ethical, and accessible AI development remains more aspiration than reality. The peer-reviewed study titled “Designing Artificial Intelligence: Exploring Inclusion, Diversity, Equity, Accessibility, and Safety in Human-Centric Emerging Technologies” proposes a bold redesign of the AI development lifecycle around five core principles: Inclusion, Diversity, Equity, Accessibility, and Safety, collectively called IDEAS.
The pilot study draws on interviews with twelve global experts in AI, spanning academia, industry, policy, and design. Their insights reveal that while AI holds transformative potential, its current development practices often reinforce bias, exclude vulnerable populations, and lack safety-by-design features.
How can AI be designed to foster inclusion and equity rather than reinforce inequality?
The study interrogates how AI might foster equity rather than exacerbate societal divides. The experts agreed: the potential for AI to democratize access to education, healthcare, and employment is undeniable. Yet, this promise is undercut by data-driven biases and homogenous design teams. Generative AI, for instance, often defaults to narrow, Westernized imagery, even when prompted for diverse representation, a failing attributed to biased training data and lack of contextual awareness.
The experts stressed that ethical AI must begin with inclusive datasets and diverse design teams. Practical recommendations included funding pathways for underrepresented communities to engage with AI development, community-led audits, and training datasets enriched with diverse sociocultural input. One participant illustrated this with a common failure: even when explicitly asked to generate an image of a female CEO, AI models routinely default to archetypal portrayals shaped by male-centric Western aesthetics.
Further, embedding equity from the ground up means engaging marginalized voices in AI governance and enabling co-creation. Several experts urged that AI development must mirror societal diversity not just in interface outcomes, but throughout design processes, data sourcing, and product deployment.
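The kind of community-led audit the experts describe does not require elaborate tooling. The sketch below is a minimal, hypothetical illustration in Python and is not drawn from the study itself: `generate_image` and `classify_attributes` are placeholders standing in for whatever image model and labeling step (human annotators or a vetted classifier) an audit team would actually use, and the random labels exist only so the example runs end to end.

```python
import random
from collections import Counter

# Hypothetical placeholders: a real audit would call an actual image model and
# label outputs with human annotators or a vetted classifier, not random choices.
def generate_image(prompt: str) -> str:
    return f"<image for: {prompt}>"  # placeholder; no real model is called here

def classify_attributes(image: str) -> dict:
    # Placeholder labels chosen at random, only so the sketch is runnable.
    return {
        "perceived_gender": random.choice(["woman", "man"]),
        "setting": random.choice(["Western office", "non-Western office"]),
    }

def audit_prompt(prompt: str, n_samples: int = 100) -> dict:
    """Generate n_samples outputs for one prompt and tally perceived attributes."""
    tallies: dict = {}
    for _ in range(n_samples):
        labels = classify_attributes(generate_image(prompt))
        for attribute, value in labels.items():
            tallies.setdefault(attribute, Counter())[value] += 1
    return tallies

if __name__ == "__main__":
    print(audit_prompt("a photo of a female CEO in her office"))
```

Comparing the resulting tallies against what the prompt explicitly requested makes skewed defaults visible and easy to report back to developers.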
What risks do AI systems pose to marginalized communities, and how can they be mitigated?
The study's second line of inquiry addressed growing alarm over AI's misuse and systemic risks, especially for vulnerable populations. Chief among the concerns: AI-powered misinformation, privacy violations, lack of transparency, and algorithmic discrimination. Experts pointed to examples such as AI being used to manipulate voter sentiment during election cycles through microtargeted media and content-recommendation loops, a phenomenon echoing the Cambridge Analytica scandal.
Transparency and accountability surfaced as urgent ethical frontiers. While some AI developers aim for ethical use, interviewees noted a recurring failure to implement basic safety guardrails. Experts proposed stronger legislation, like the EU AI Act, and mandatory public reporting mechanisms. But several emphasized that regulation alone was insufficient. A recurring theme was the necessity for independent oversight and explainability-by-design so users can understand and challenge AI decisions.
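Explainability-by-design can be illustrated with a deliberately simple case. The sketch below is not taken from the study: it trains a toy logistic-regression model on synthetic "eligibility" data (the feature names and data are assumptions made for illustration) and returns each automated decision together with the per-feature contributions to its log-odds, the kind of reason-giving that would let a user understand and contest an outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: a tiny synthetic "eligibility" model with named features,
# so each automated decision can be returned alongside its reasons.
rng = np.random.default_rng(0)
feature_names = ["income", "household_size", "years_employed"]
X = rng.normal(size=(500, 3))
y = (X @ np.array([-1.0, 0.5, 0.8]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(x: np.ndarray) -> dict:
    """Return the decision plus each feature's contribution to the log-odds."""
    contributions = model.coef_[0] * x
    reasons = sorted(
        ((name, round(float(c), 3)) for name, c in zip(feature_names, contributions)),
        key=lambda pair: abs(pair[1]),
        reverse=True,
    )
    return {"decision": int(model.predict(x.reshape(1, -1))[0]), "reasons": reasons}

print(explain_decision(X[0]))
```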
A specific risk outlined was the commodification of mental privacy, particularly with neurotechnology and emotion-sensing AI. Experts warned that AI systems capable of interpreting emotional and cognitive data from facial expressions or online behavior pose serious risks of manipulation. The authors call for ethical frameworks that safeguard cognitive liberty and mental integrity.
The IDEAS framework suggests mandatory risk assessment protocols at individual, community, and societal levels. One proposed model, based on Stanford researcher James Landay's hierarchy of safety, calls for aligning AI outcomes with real-world cultural norms, public values, and collective well-being.
How can AI be made accessible and meaningful for all populations?
Addressing the third core question, ensuring AI accessibility, the study found a significant gap in digital literacy and intergenerational knowledge. Participants noted that many older adults are excluded from AI benefits due to unfamiliarity and fear, while younger generations risk blind reliance without critical understanding. Literacy gaps, the authors argue, must be bridged through formal education and informal outreach.
Some proposed embedding AI education in school curricula starting from early childhood, delivered through play and social media platforms. Others suggested public awareness campaigns fronted by popular figures or comedians to demystify AI technology and reduce resistance. Accessibility was also discussed in terms of interface design, such as voice control, adjustable contrast settings, and multilingual support.
One promising avenue includes Retrieval-Augmented Generation (RAG) models trained on curated datasets from underrepresented communities to ensure their accurate representation in AI outputs. Other strategies include participatory co-design with disabled or neurodivergent users, and integration of the World Wide Web Consortium’s (W3C) accessibility guidelines.
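The study does not specify an implementation, but the retrieval half of such a RAG pipeline can be sketched in a few lines. The example below uses TF-IDF similarity over a tiny illustrative corpus and stops at assembling the augmented prompt; the corpus contents, function names, and the omitted language-model call are all assumptions made purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny illustrative "curated corpus"; in practice these would be documents
# sourced and vetted by the communities being represented.
corpus = [
    "Community-led description of local leadership roles and titles.",
    "Glossary of culturally specific terms contributed by community members.",
    "Accessibility notes gathered from disabled and neurodivergent users.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k corpus passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble an augmented prompt; the language-model call itself is omitted."""
    context = "\n".join(retrieve(query))
    return f"Answer using the community-provided context below.\n{context}\n\nQuestion: {query}"

print(build_prompt("How should leadership roles be described for this community?"))
```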
But technical solutions alone are not enough. Equally important is the systemic promotion of IDEAS principles across governance, design education, and public policy. The report proposes “IDEAS audits” in educational institutions, welfare screening systems, and AI labs to evaluate whether tools meet inclusive benchmarks.
Moving from awareness to action
The IDEAS framework is grounded in qualitative insights and participatory design principles. Unlike existing frameworks such as IEEE's Ethically Aligned Design or AI4People's Ethical Guidelines, IDEAS offers practical tools, including workflow scaffolding, heuristic prompts, and co-design methodologies, that developers and policy designers can apply immediately.
Its strength lies in cross-disciplinary relevance, equally accessible to computer scientists, user experience designers, civil servants, and accessibility advocates. Empirical testing is already underway in three sectors: AI product development for neurodiverse users, digital accessibility audits in universities, and policy reviews for welfare AI systems.
The authors caution, however, that this pilot study, based on a small sample and focused predominantly on Western contexts, must be expanded. Future research will include stakeholders from underrepresented regions such as Africa and Latin America and incorporate broader user categories, including non-technical and disabled participants.
First published in: Devdiscourse

