
One year after the AI Action Summit: What comes next for science?

At the Collège de France, researchers ask what kind of AI ecosystem research can sustain

One year after the AI Action Summit, the question is no longer whether artificial intelligence will transform science. That shift is visible across laboratories. The more immediate question, raised at the Collège de France during the conference “AI in Paris: One year after the Summit, building the future,” is structural: what kind of AI ecosystem are we building for research?

At the conference, researchers did not compete over model performance. They spoke about limits: energy, proof, access, regulation. The discussion, co-organized by Institut Polytechnique de Paris and Université PSL with HEC Paris, felt less like a celebration than a stocktaking exercise before the next global meeting, the AI Impact Summit in New Delhi.

If AI is becoming infrastructure for research, then its constraints become scientific questions in their own right.

The end of effortless scaling

For much of the past decade, AI progress has followed a simple formula: more data, more parameters, more compute. Performance improved as models scaled.

But in academic science, scale behaves differently. It is a cost center.

Vicky Kalogeiton, Professor of Artificial Intelligence at École polytechnique (IP Paris) and Hi! PARIS Chair, questioned the assumption that performance must follow exponential growth in parameters and compute. In many scientific settings, data are limited, noisy, or difficult to label. Budgets are finite. Environmental costs are measurable.

Her work on diffusion models illustrates another route. Instead of discarding imperfect image–text pairs, models can be trained to extract structure from noisy inputs. The result is not simply technical efficiency. It changes who can participate. If progress depends exclusively on access to massive infrastructure, research concentrates. If methodological efficiency counts as innovation, capability spreads more widely across laboratories.

The policy implications were not abstract. Philippe Baptiste, France’s Minister of Higher Education and Research, warned that Europe cannot mirror the most compute-intensive paths taken elsewhere without confronting energy realities. Data centers are not neutral infrastructure. They consume space, electricity, and political capital.

In this context, “frugality” is strategic.

Acceleration under supervision

In molecular simulation, the constraint is time. Modeling catalytic reactions or molecular dynamics requires navigating high-dimensional systems over long horizons. Machine learning offers acceleration: neural networks can approximate force fields, uncover latent variables, and improve sampling efficiency.

Tony Lelièvre, Professor of Applied Mathematics at ENPC (IP Paris), described this potential carefully. In his field, researchers often know the target distribution they should sample from. The scientific challenge is reaching it efficiently without drifting away from physical correctness.

This is where enthusiasm meets discipline. A faster simulation is valuable only if its error can be quantified. A generative model is useful only if its range of validity is understood. Scientific AI must satisfy standards of reproducibility and statistical robustness that go beyond performance on a benchmark.
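The situation Lelièvre describes, where the target distribution is known up to a normalizing constant and the question is how to sample from it efficiently with a quantifiable error, can be illustrated with a minimal sketch. The example below uses self-normalized importance sampling on a toy quartic potential; the potential, the proposal, and the effective-sample-size diagnostic are illustrative assumptions, not the methods discussed at the conference.

```python
import numpy as np

# Illustrative sketch: self-normalized importance sampling when the target
# density is known only up to a constant. The effective sample size (ESS)
# plays the role of the error diagnostic the text calls for: a low ESS
# signals that the estimate cannot be trusted.

rng = np.random.default_rng(0)

def target_unnorm(x):
    """Unnormalized Boltzmann-style density exp(-V(x)) for a toy quartic potential."""
    return np.exp(-x**4 / 4.0)

def proposal_pdf(x):
    """Cheap, easy-to-sample proposal: standard normal density."""
    return np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

n = 100_000
x = rng.standard_normal(n)                  # draw from the proposal
w = target_unnorm(x) / proposal_pdf(x)      # importance weights
w /= w.sum()                                # self-normalize

mean_est = np.sum(w * x)                    # estimate of E[x] under the target
ess = 1.0 / np.sum(w**2)                    # effective sample size
```

A fast sampler that produces a large ESS is useful; one that collapses to a handful of effective samples is not, no matter how quickly it runs. That is the sense in which acceleration only counts once its error can be quantified.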

In that sense, AI becomes an instrument inside scientific reasoning. Like any instrument, it requires calibration, testing, and critical interpretation.


Vicky Kalogeiton (École polytechnique – IP Paris) & Tony Lelièvre (ENPC – IP Paris) at the Collège de France during the conference “AI in Paris: One year after the Summit, building the future.”

When the data have physics

Earth observation offers another test case. Radar satellites generate complex-valued signals shaped by electromagnetic interactions and acquisition geometry. The data follow physical laws and contain characteristic noise patterns such as speckle.

Florence Tupin, Professor at Télécom Paris (IP Paris), emphasized that generic deep learning models often underperform when they ignore these structures. The most reliable systems incorporate domain knowledge directly into their architecture. Self-supervised approaches grounded in signal physics can extract meaningful representations without relying on extensive labeled datasets.
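The "characteristic noise patterns" the text mentions can be made concrete. A standard textbook model for SAR intensity speckle, used here as an illustrative assumption rather than a description of Tupin's actual systems, treats it as multiplicative Gamma-distributed noise with unit mean, so averaging over L looks preserves the underlying reflectivity while shrinking the variance.

```python
import numpy as np

# Toy sketch of the multiplicative speckle model for SAR intensity:
# I = R * S, with S ~ Gamma(shape=L, scale=1/L), so E[S] = 1 and
# Var(S) = 1/L. This is the physical structure a generic model ignores
# and a physics-aware one can exploit.

rng = np.random.default_rng(42)

def add_speckle(reflectivity, looks=1):
    """Multiply a reflectivity map by unit-mean Gamma speckle."""
    s = rng.gamma(shape=looks, scale=1.0 / looks, size=reflectivity.shape)
    return reflectivity * s

reflectivity = np.full((512, 512), 2.0)      # flat synthetic scene
noisy = add_speckle(reflectivity, looks=1)   # single-look: heavy speckle
noisy_4look = add_speckle(reflectivity, looks=4)  # multi-look: variance ~ 1/4
```

Because the noise is multiplicative rather than additive, architectures and self-supervised objectives built for Gaussian noise are mismatched from the start; encoding the speckle statistics into the model is what the article means by giving the data their physics.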

The lesson is consistent across disciplines. In science, performance and interpretability cannot be separated. Embedding physical constraints into models improves both reliability and credibility.

Governance as a scientific condition

The discussion in Paris moved beyond methods to governance.

AI for science requires infrastructure: computing clusters, shared datasets, regulatory clarity, and sustained public investment. Who controls these resources shapes who can conduct research.

Antonin Bergeaud, Professor at HEC Paris and Hi! PARIS Chair, argued that diffusion will depend on policy choices as much as technical advances. AI will involve significant public investment and will influence labor markets and productivity. Yet uncertainty remains high. On the future of white-collar work, he noted, the evidence is still incomplete.

Europe faces a visible trade-off. Strong commitments to environmental protection and data security define its identity, but they also impose economic constraints. The challenge is not whether to abandon these safeguards. It is how to design incentives that allow innovation and diffusion without eroding them. Regulation shapes firm size, investment decisions, and the direction of research. It is not external to technological progress; it steers it.

Another structural issue surfaced in quieter conversations: the imbalance between academia and industry. Many frontier developments now occur in private companies. Students move toward corporate laboratories. Knowledge does not always circulate back. If AI becomes embedded in the production of scientific knowledge, maintaining strong public research ecosystems becomes a strategic necessity.


Florence Tupin (Télécom Paris – IP Paris) at the Collège de France during the conference “AI in Paris: One year after the Summit, building the future.”

From enthusiasm to responsibility

What has changed in a year is not the capability of AI systems alone. It is the framing.

Energy consumption is no longer an external concern. It is a boundary condition.
Validation is no longer secondary. It is central.
Diffusion is not automatic. It depends on institutional design.

AI for science is moving from acceleration to accountability.

The next international summit in New Delhi will test whether this more constrained perspective becomes global consensus. But the signal from Paris was clear. The debate is no longer about whether AI can push scientific frontiers. It is about the conditions under which it should.

If artificial intelligence becomes part of the infrastructure of knowledge production, then its design, governance, and limits are themselves scientific questions.

What comes next for science is not predetermined by model size. It will depend on how rigorously institutions confront the constraints now in plain view, and on how deliberately they choose the path forward.