A Special Issue Proposal for the Harvard Data Science Review

As artificial intelligence systems scale in capability and societal impact, trust is no longer optional. The central question is not only what AI can do, but whether it can be deployed responsibly, at scale, and in alignment with societal values.

The Institut Polytechnique de Paris (IP Paris), together with its interdisciplinary center Hi! PARIS, has proposed a special issue of the Harvard Data Science Review titled:

“Trustworthy AI at Scale: An IP Paris Perspective.”

Why Trustworthy AI?

Fairness, transparency, robustness, accountability, privacy, and human oversight are now core dimensions of responsible AI systems. With the General Data Protection Regulation (GDPR) and the EU AI Act, Europe has made trustworthiness a binding legal requirement rather than a voluntary principle.

This special issue aims to examine how these principles can be implemented in practice, especially as AI systems grow in complexity and reach.

An IP Paris Perspective

IP Paris brings together leading institutions in science and engineering, including École polytechnique, ENSTA Paris, ENPC, ENSAE Paris, Télécom Paris, and Télécom SudParis. Through Hi! PARIS, in partnership with HEC Paris and Inria, the institute connects AI research with policy, governance, and industry challenges.

IP Paris researchers contribute across the full spectrum of Trustworthy AI, from privacy and fairness to robustness, explainability, distributed learning, and governance.

This call will gather both invited contributions and open submissions.

Guest Editors
  • Aymeric Dieuleveut, École polytechnique, IP Paris
  • Jamal Atif, Vice President for Research and Innovation, IP Paris
  • Michael I. Jordan, University of California, Berkeley
  • Florence d’Alché-Buc, Télécom Paris

Scope of the Special Issue

The issue will gather approximately 15–20 contributions from both IP Paris researchers and international scholars, organized into:

  • Panorama articles / state-of-the-art perspectives on fairness, robustness, and scalable AI

Panorama provides a global forum for big-picture insights, thoughtful debates, and forward-looking visions, inviting leaders and builders to explore data science’s evolving impact with curiosity, nuance, and a healthy skepticism of hype.

  • Cornucopia articles / governance and regulatory effectiveness in critical domains

Cornucopia highlights powerful applications and innovative uses of data science to address societal, environmental, and intellectual challenges. It favors contributions that not only solve important problems but also share transferable insights that advance research, education, and future innovation.

  • Milestones and Millstones / deeper technical advances (e.g., robustness, differential privacy, auditing)

Milestones and Millstones features foundational advances, theoretical innovations, and methodological insights, from ethics to quantum computing and beyond. It seeks to expand research frontiers, spark new questions, and inspire curious minds to confront data science’s most challenging puzzles.

  • Stepping Stones articles / educational and communication approaches for public understanding

Stepping Stones examines policies, programs, and innovations that strengthen data science education from early learning to executive training. It highlights effective teaching methods, curricula, and success stories that showcase the power of learning and communicating data science well.

We encourage prospective contributors to review the Harvard Data Science Review (HDSR) author guidelines and article templates before preparing their submissions.

Authors do not need to specify a section (e.g., Milestones and Millstones, Stepping Stones, Cornucopia, or Panorama). However, when submitting through Editorial Manager, please select “special issue” as the submission type.

The editorial process will follow a rigorous peer-review model, with invited contributions and selected open-call submissions.

Tentative timeline

  • Spring 2026 – Finalize invited contributor list

  • June 1, 2026 – Target date for first-wave submissions

  • Summer 2026 – First review round

  • Early 2027 – Further revisions and reviews

  • Spring–Summer 2027 – Finalization and closing of the issue

Contributing to the Global Debate

Trustworthy AI at scale is both a technical and societal challenge. Anchored in Europe yet globally inclusive, this initiative seeks to clarify the intellectual foundations of responsible AI and contribute to the broader debate on how AI can evolve without undermining public trust.

Further updates will follow as the special issue develops.