AI Factory France – Finance Vertical

Roundtable on the impact of AI on finance

Tuesday, 3 February 2026 (Palais Brongniart)

Jean-Jacques Pluchart

The Turgot Club attended the launch of the AI Factory Finance Vertical, created by the Louis Bachelier Institute in collaboration with GENCI. The workshop brought together researchers and players from the financial industry around the theme of transforming risk management practices and the conditions necessary for the controlled and reliable use of AI and LLMs in an increasingly complex financial environment. The round table was introduced by three presentations.

The challenges of the digitisation of finance

Marie Brière (Director General of the ILB) pointed out that finance is one of the largest users of quantitative models, yet remains largely absent from the major existing benchmarks. The Stanford University AI Index lists, over several hundred pages, AI datasets and benchmarks without devoting a specific section to financial services. She therefore hopes for greater dialogue between academic researchers and practitioners on the benchmarks applicable in the various fields of finance. The objectives of the AI Factory's Finance Vertical are to identify concrete use cases of AI in finance, to reflect on reference databases adapted to the problems of the sector, to propose relevant metrics for assessing the reliability, robustness and added value of AI models in risk management, and finally to identify the services and resources necessary to advance applied research in AI for finance.

The contributions of AI in finance

Charles-Albert Lehalle (professor at the École Polytechnique) explained that the point of searching for benchmarks is to reduce the time, and therefore the cost, of data processing without reducing its reliability and efficiency. He recommends categorising models more carefully according to their objectives (measuring volatility and/or liquidity), their functions (portfolio management, construction of structured products, optimisation of securities settlement, etc.) and the nature of their risks (market, operational, etc.). One of the attractions of AI is the ability to multiply simulations of the movements and values of securities under monetary and/or financial "shocks". AI also makes it possible to test the application to finance of models already proven in other disciplines, including physics.
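Lehalle's point about multiplying simulations under shocks can be made concrete with a minimal Monte Carlo sketch. All parameters below (drift, volatility, shock size) are illustrative choices, not figures from the talk: terminal prices are simulated under geometric Brownian motion, with and without a one-off market shock, and the resulting distributions compared.

```python
import math
import random
import statistics

def simulate_paths(s0, mu, sigma, days, n_paths, shock_day=None, shock=0.0, seed=42):
    """Simulate terminal prices under geometric Brownian motion.

    A one-off multiplicative `shock` (e.g. -0.10 for a 10% drop) can be
    applied on `shock_day` to mimic a monetary or financial shock.
    Parameters are purely illustrative.
    """
    rng = random.Random(seed)
    dt = 1 / 252  # one trading day
    terminals = []
    for _ in range(n_paths):
        s = s0
        for day in range(days):
            z = rng.gauss(0.0, 1.0)
            # Exact discretisation of GBM over one day
            s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            if shock_day is not None and day == shock_day:
                s *= 1.0 + shock
        terminals.append(s)
    return terminals

# Same random seed for both runs, so the only difference is the shock.
base = simulate_paths(100.0, 0.05, 0.2, days=60, n_paths=2000)
shocked = simulate_paths(100.0, 0.05, 0.2, days=60, n_paths=2000,
                         shock_day=30, shock=-0.10)
print(round(statistics.mean(base), 2), round(statistics.mean(shocked), 2))
```

Because the shock is multiplicative and both runs share the same random draws, the shocked distribution is simply the base distribution scaled down, which makes the sketch easy to sanity-check before richer shock models are introduced.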

AI biases in finance

Damien Challet (CentraleSupélec) stressed the importance of questioning generative AI and LLM applications well (through prompts), as their structures and processes are influenced by the languages (especially programming languages), training data and cultures of their designers. To deal with a given question, he advises consulting several sites and search engines, reformulating the prompts under different wordings, writing them in explicit rather than implicit language, and correcting any translation biases with specialised models. He notes that QLMs, quantum versions of language models that are still at the theoretical stage, could eventually change the way models are trained and biases are corrected.
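Challet's advice to reformulate prompts and cross-check answers can be sketched in a few lines. The paraphrase templates and the `ask` callable below are hypothetical illustrations, not an established methodology or a real model API: the idea is simply that disagreement between reworded prompts is a cheap signal of framing bias.

```python
def reformulate(question):
    """Build paraphrased variants of a prompt so answers can be
    cross-checked for framing bias. Templates are illustrative only."""
    return [
        question,
        f"Answer precisely, stating your assumptions: {question}",
        f"From the viewpoint of a sceptical reviewer: {question}",
        f"Restate the question in your own words, then answer: {question}",
    ]

def cross_check(question, ask):
    """Query a model through the caller-supplied `ask` callable
    (hypothetical: any function mapping prompt -> answer string)
    and report whether the answers agree across reformulations."""
    answers = [ask(p) for p in reformulate(question)]
    return {"answers": answers, "consistent": len(set(answers)) == 1}

# Demo with a deterministic stand-in for a real model, which changes its
# answer when the prompt adopts a sceptical framing:
result = cross_check(
    "Is volatility clustering present in equity returns?",
    lambda p: "uncertain" if "viewpoint" in p else "yes",
)
print(result["consistent"])  # → False: the stand-in disagrees with itself
```

In practice `ask` would wrap one or several real LLM endpoints (and, per Challet, several different models), with a final pass to correct translation biases before comparing answers.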

The round table: AI in market finance (use cases, benchmarks and metrics)

The round table focused on how AI is transforming risk management practices and on the conditions for the controlled and reliable use of AI models. It brought together Laurent Carlier (Global Head of Data and AI Lab, BNP Paribas Global Markets), Gaetan Caillaut (AI Researcher, Dragon LLM), Pascal Oswald (Head of Market and Counterparty Risk Modelling, Natixis), Alain Durmus (Professor, École Polytechnique) and Romuald Élie (Research Scientist, Google DeepMind). The debate was moderated by Marie Scheid (PhD student, École Polytechnique).

The panellists distinguished between tools applied to routine work processes, data extraction and management, text transformation and generation, market risk simulation, algorithm coding and other specific functions. A significant part of due diligence is now entrusted to AI, which executes it faster, in a more structured way and at lower cost, in particular thanks to advanced technologies such as quantum computing.

In these different areas, the difficulty lies in the "change of scale" in the mass of data processed, the power of the models and the complexity of the calculations (increasingly multivariate and stochastic models). Another difficulty concerns optimising the trade-off between a model's efficiency and its processing cost. Researchers are striving to build more or less open natural or synthetic databases (datasets) and to test standard models against them (benchmarks). Benchmarks provide a structured frame of reference that does not restrict creativity but anchors it in financial reality. Guided workflows (covering specific use cases) ensure that each individually generated clause maintains internal consistency, optimal readability and strict financial, legal and tax compliance.

It is necessary to rely on applications based on "agentic" approaches, specifically coded and trained for particular uses and capable of exploiting advanced computing technologies. These tools are tested and then validated by internal committees at banks and insurance companies before being used by practitioners. The collection and dissemination of data are also subject to various public regulations (such as the GDPR in Europe) and private codes*.
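The panel's call for shared benchmarks and metrics comes down to freezing a dataset and a scoring rule so that models can be compared. A minimal sketch, with toy data and toy metrics invented for illustration (accuracy, plus a simple robustness score measuring how far accuracy falls when inputs are perturbed):

```python
def evaluate(model, dataset):
    """Score a model on a fixed (input, label) dataset: the essence of a
    benchmark is that both dataset and metric are frozen and shared."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def robustness(model, dataset, perturb):
    """Robustness metric: accuracy drop when every input is perturbed,
    e.g. to mimic noisy or shocked market data."""
    return evaluate(model, dataset) - evaluate(lambda x: model(perturb(x)), dataset)

# Toy task: classify a daily return as "up" or "down".
data = [(0.02, "up"), (-0.01, "down"), (0.005, "up"), (-0.03, "down")]
model = lambda r: "up" if r > 0 else "down"

acc = evaluate(model, data)                       # accuracy on clean data
drop = robustness(model, data, lambda r: r - 0.01)  # accuracy lost under a -1% shift
print(acc, drop)  # → 1.0 0.25
```

A real finance benchmark of the kind the Vertical aims to build would replace the toy dataset with curated market or text data and the toy metrics with measures of reliability, robustness and added value in risk management, but the frozen dataset-plus-metric structure is the same.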

ChatGPT is a foundation model, a Large Language Model belonging to generative AI. It provides general answers but does not reason like a financial specialist: sufficient financial knowledge is needed to know how to query a tool like ChatGPT properly.

More advanced technologies, and in particular agentic AI, mark an important evolution: they make it possible to move from simple content generation to more structured framing, decision-making and action capabilities. These approaches frame the foundation models to make them more precise and more reliable in specific areas and, above all, directly usable by people without advanced financial knowledge.

Most of the speakers would like to see a return to more "frugality" in the volumes of data processed (structured and unstructured), in the power of the models and in the duration of training (in particular by reinforcement). They want to pool benchmark research across the core functions of the finance professions. They hope for progress in the processing of the languages (linguistics) and signs (cryptology) that underlie the models' incoming and outgoing data. The debate also covered the "hallucinations" (inconsistencies) in the results of certain processes, which result from an uncontrolled change of register in the calculations or in text generation. These changes are sometimes attributable to the systems' misinterpretation of tokens.

The panellists believe that banks are becoming "technology companies", given the growing proportion of IT specialists in their workforce, without this calling into question the role of financiers or decision-makers.

The debate continued with answers to numerous more or less technical questions, which demonstrated that the integration of AI in finance is a vast, multidimensional process of "creative destruction" (in Schumpeter's sense) that is still only at an exploratory stage.

* See the article on AI and intellectual property law on clubturgot.com.