Jean-Jacques Pluchart In January 2026, the FNEGE (National Foundation for Business Management Education) published a study on Practitioners Becoming Researchers (PDCs), aimed at "understanding their backgrounds and motivations, supporting future candidates in this transition, and highlighting the specific strengths of these profiles for teaching and research in management sciences". The study draws on a survey conducted between January and March 2025, which collected 207 responses, including 124 from teacher-researchers with between five and twenty years of experience. The survey reveals that a quarter of respondents work in public institutions, half in private schools and the remaining quarter abroad. The study distinguishes three PDC profiles: "meaning-oriented" (existential), "intellectual challenge" (engagement with the theoretical corpus), and "balanced" (between professional and private life). The respondents underline the scale of the challenges PDCs must meet: research training, assimilation of the scientific literature, a drop in income, narrowed career prospects, acculturation to an often distant academic environment, confrontation with a degree of psychological distress (due to uprooting), etc. However, they recognize assets favourable to their transition: field experience, professional networks mobilized for research, a better grasp of managerial issues and challenges, a less rigid hierarchy, legitimacy with students, etc. They particularly appreciate being able to pursue parallel associative, elective, sports or consulting activities. They acknowledge having underestimated the effort of retraining and the difficulty of attaining the status of university professor. Overall, they do not regret their choice of radical retraining.
The authors of the study recommend that PDCs "adopt a learning posture, accept the status of novice, develop a critical distance from their previous practice, set up mentoring systems promoting the transmission of the codes, methods and postures of research, create places of collective reflexivity (seminars, peer groups) mobilizing the diversity of backgrounds, organize webinars informing practitioners about the realities of the teacher-researcher profession, and promote projects enhancing the professional expertise of PDCs". The study shows that the growing number of PDCs contributes to a better synergy between theorists and practitioners, between the university and the company, and between fundamental and applied research. Reference: https://us02web.zoom.us/webinar/register/wn_57sf8hq_rdol5jh987mkxq
AMERICAN STABLECOIN VERSUS DIGITAL EURO
Jean-Jacques Pluchart The digitization of currencies is currently marked by a rivalry between two models: stablecoins backed by the US dollar, and the digital euro representing the Central Bank Digital Currency (CBDC) of the eurozone. This rivalry reflects the opposition between two fundamental economic systems, one mainly regulated in the short term by the market, the other governed in the long term by the State. It thus echoes a new "tragedy of the horizon", in Mark Carney's famous phrase of 2015. The digital euro system, approved by the European Council on 19 February 2025, should be in place by 2028-2029, following an institutional timetable defined by the European Central Bank, while the dollar-based stablecoin system is already in full development. Monetary stablecoins strive to combine the flexibility of cryptocurrencies such as bitcoin with the stability of fiat currencies issued by central banks. They are in principle backed by solid assets such as US Treasury bills and Fed reserves, and their transactions are secured by the blockchain. Their assets are liquid and are not remunerated, so as not to compete with banking products. In principle, they perform only the functions of payment (in particular cross-border payment) and of "bridge currency" between currencies. In countries with unstable economies, they can act as an informal store of value. However, they do not fully meet certain criteria required of systemic currencies. Parity between issuers is not guaranteed in the event of a crisis, and competition, and therefore a hierarchy, can then emerge between them. The elasticity of stablecoins may be affected in a monetary shock, as the system is not backed by a central bank. In the absence of a lender of last resort, cryptocurrencies are suspected of being procyclical.
Due to the decentralization of their issuers and the differences between national regulations, the integrity of the stablecoin system and the protection of its users may be threatened. The market for monetary stablecoins also has an almost monopolistic structure, largely dominated by the issuers Tether and Circle. Their consolidated outstanding amount is estimated at $350 billion as of 31 December 2025. Given the uncertainties weighing on their future, the capitalization of stablecoins backed by the US currency is estimated at between $500 billion and $3,700 billion by 2030. Other types of stablecoins, backed by baskets of cryptocurrencies or indexed to metals (such as gold) or commodities, are generally unsecured and marked by even more erratic fluctuations. According to some economists, especially Anglo-Saxon ones, an inert, non-interest-bearing CBDC cannot match the large-scale adoption of stablecoins distributed by platforms. They consider that the risk of stablecoins threatening monetary sovereignty can only arise within payment networks, because the value of a national or federal currency is stabilised by the central bank's balance sheet, and the convertibility between deposits is guaranteed by banking regulations. According to most monetary economists, the use of dollar-backed stablecoins is still limited (11% of global monetary assets at the end of 2025), because they face direct competition from bank savings products and, above all, because they may indirectly suffer from the erosion of confidence in major currencies such as the US dollar. Monetary stablecoins therefore risk suffering the fate of certain now-vanished private currencies that have marked world monetary history.
A new consumer credit reform to combat over-indebtedness
Nadia Antonin The Banque de France is sounding the alarm: according to the latest monthly barometer of financial inclusion, published on 11 December 2025, the number of over-indebtedness files filed with the Banque de France increased by 8.9% over the first eleven months of 2025 compared with the same period in 2024. The increase intensified in November 2025, up 12.8% on November 2024. In addition, according to the Banque de France's 2024 typological survey published in February 2025, more than four in ten over-indebtedness files (43%) include at least one consumer loan.
The rise of consumer credit
In view of technological developments and changes in consumer habits, consumer credit has grown strongly in recent years. It is an essential lever for French households, allowing them to finance personal projects such as a car, a trip or household appliances, or to overcome cash-flow difficulties. It also represents a strategic activity for banking and financial organisations. Consumer credit is credit granted to individuals for the purchase of a manufactured good, as opposed to production credit granted to the manufacturer. It concerns transactions other than those related to real estate. Its amount is between €200 and €75,000 and its repayment period exceeds three months. It is governed by the Consumer Code, which lays down a set of rules on the content and conclusion of the contract.
There are several forms of consumer credit:
– the personal loan, which finances a project without the borrower having to justify the use of the funds;
– the assigned credit, linked to the purchase of a specific good or service;
– revolving credit, a credit line accessible at any time and renewed as repayments are made;
– the lease with purchase option, which allows a good (for non-professional use) to be rented with the option of buying it at the end of the contract at a price fixed from the start;
– personal microcredit, for people without access to traditional bank loans due to low income or financial precariousness.
Consumer credit and over-indebtedness
Attractive in appearance, consumer credit can quickly become a financial trap for those who do not understand how it works. It is considered to favour over-indebtedness, that is, a situation in which a person (or household) is no longer able to honour their debts (monthly loan payments, bank overdrafts, rent, etc.). Over-indebtedness developed in the late 1980s following the lifting of credit controls, in a context of strong growth in consumer credit. Despite the passing of the Neiertz law in December 1989 to combat over-indebtedness, the number of filings has continued to grow. Faced with the growing flow of cases, the government adopted the Lagarde law of 1 July 2010, which imposes strict regulations on consumer credit for amounts between €200 and €75,000 with a minimum duration of three months. To circumvent the Lagarde law, two new types of loans, not subject to the provisions of the Consumer Code, appeared: the "split payment", which consists of paying for a purchase in 3 or 4 instalments, and the "mini-credit", a loan of a few hundred euros. These rapid payment facilities, often perceived as harmless, have drawn more and more French people into over-indebtedness.
In 2024, 17% of over-indebtedness files contained a mini-credit or a split payment, compared with only 1% in 2022 and 7% in 2023 (source: Observatory of Banking Inclusion). This sharp increase prompted the French government to transpose, by order of 3 September 2025, the European Directive (EU) 2023/2225 of 18 October 2023 on consumer credit agreements.
Tightening of conditions for access to consumer credit
This new reform, due to come into force on 20 November 2026, aims to regulate access to credit more strictly and to strengthen the protection of borrowers. The new provisions are as follows:
– an extension of the scope of consumer credit regulations: until now, only transactions between €200 and €75,000 were regulated; the new rules will apply from the first euro up to €100,000 and will bring split payment, mini-credit, bank overdraft and lease with purchase option within the scope;
– a careful study of the borrower's solvency and a consultation of the Banque de France's Personal Loan Repayment Incidents File (FICP) before any signing of the contract;
– strictly regulated advertising, which must be "clear, fair and not misleading";
– strengthened pre-contractual information obligations: the standardised preliminary form (FIPEN) given to the borrower will have to describe the characteristics of the contract precisely;
– support for borrowers in financial difficulty: the lender will have to offer solutions such as renegotiation, rescheduling, spreading or extending the loan, and direct them towards advisory services;
– a new sanctioning power: control and sanctions will now be entrusted to the Directorate General for Competition, Consumer Affairs and Fraud Control (DGCCRF).
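The widening of the regulated scope described above can be sketched as a comparison of the two rules (the amounts and durations come from the article; the function names and the duration convention are illustrative assumptions, not legal definitions):

```python
def regulated_before_reform(amount_eur: float, duration_months: int) -> bool:
    """Lagarde-law scope: EUR 200 to EUR 75,000, duration over three months."""
    return 200 <= amount_eur <= 75_000 and duration_months > 3

def regulated_after_reform(amount_eur: float) -> bool:
    """Scope from 20 November 2026: from the first euro up to EUR 100,000,
    regardless of duration, bringing split payments and mini-credits
    into the regulated field."""
    return 0 < amount_eur <= 100_000
```

For example, a €150 purchase split into four instalments over two months escaped the old rules but falls within the new ones.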
This new reform of consumer credit to combat over-indebtedness has thoroughly modernised the provisions applicable in France in order to protect borrowers and harmonise the rules at European level. It introduces a requirement for transparency, ethics and accountability.
THE RETURN OF THE DRAGHI REPORT
Jean-Jacques Pluchart The end of 2025 was marked by a revival of the concrete proposals formulated in the report led by Mario Draghi and published on 9 September 2024. This revival is explained in particular by the increasingly isolationist policy of the United States, which forces the European Union finally to define an industrial strategy that is both more proactive and more concerted. The debates on this issue in the European and national parliaments have shown, beyond certain ideological oppositions, the relevance of the recommendations of the "Draghi report" for redressing the economy of the European Union and reassuring its population about its future. The report notes the Old Continent's growing technological and economic backwardness since the turn of the century. The lag is considered increasingly undeniable and difficult to reverse, especially in the high-tech sectors, and above all in Artificial Intelligence. According to the report, the investment required to close this gap would be around 800 billion euros per year over several years, or 4.5% of the GDP of the 27 Member States, in the form of both public and private expenditure. This amount matches that of the Rearm Europe plan intended to strengthen European defence. The technological lag is attributed to insufficient growth, driven by a downward trend in the productivity of human and material factors. This decline is said to be caused by both scientific and industrial factors: insufficient protection of emerging industries, inadequate regulation of high-tech markets (too focused on competition), a lack of investment in the most productive sectors, a delay in the decarbonisation of factories and, above all, an impoverishment of technical skills due to failures of R&D (in particular public R&D) and of education in the engineering sciences.
But the lag is also attributed to economic and financial shortcomings, which would call for a revival of the construction of the European capital market, the establishment of a single regulation, a European public debt dedicated to productive investments, and the development of venture capital, pension funds, securitization of receivables and non-bank financing (shadow banking). Another priority, according to the report, lies in diversifying and securing the European Union's supplies of critical resources (rare metals, electronic chips, software, active ingredients, etc.), and in strengthening industrial value chains. However, the report recognizes that Europe has certain advantages in electric mobility, micro-nuclear power, hydrogen, aeronautical construction, etc. Finally, it underlines the interest of reviving a certain economic patriotism [2] on a European scale, which presupposes a form of protectionism through stronger regulatory barriers and ecological standards penalizing certain imported products. The reference to the Draghi report [1] in public debates highlights the opposition between political circles, which are divided on the policies needed to bridge the European gap, and economists, who are generally in favour of a more dirigiste and protectionist form of the market economy. It also pits defenders of the public service against supporters of the private economy, and players in the high-tech sectors against those in other sectors of activity. Above all, it reveals oppositions between the 27 Member States of the Union, some of which consider themselves victims of a "prisoner's dilemma".
1. L'avenir de la compétitivité européenne. Une stratégie de compétitivité pour l'Europe, Mario Draghi (dir.), 2024.
2. Cf. Does economic patriotism make sense today?, C. de Boissieu, D. Chesneau (col.), Maxima, 2020.
AI at the service of the industry of the future
Jean-Jacques Pluchart The digital transformation of industry involves mobilizing innovative artificial intelligence techniques. These techniques draw on the Internet of Things (IoT), cloud computing, data exchange, prescriptive analytics, new business models, etc. They apply advanced methods to manage data flows from heterogeneous systems. They aim to achieve greater energy efficiency, more efficient maintenance, protection against breakdowns or intrusions, etc. In principle, they enable better safety, productivity, quality and profitability of industrial systems. They operate on three levels: capturing operational data from suppliers and customers; connecting stakeholders; and transforming data into decision-making aids and value-creating actions. AI offers continuous analysis capabilities dedicated to the collection of sensor data, fault diagnosis, flow modelling and the prescription of valid solutions. Current research focuses in particular on the integration of critical maintenance, safety and cybersecurity processes, striving to improve system performance without compromising security. Systems engineers must choose the learning, optimization or prediction methods best suited to the machines' fields of application. This is particularly the case in the electrical energy sector: the digital management of the generation, transport, distribution and consumption of energy helps to reduce the mechanical inertia of the electricity network and to better ensure the balance between production and consumption. AI makes it possible to capture, store and process an ever-larger mass of data in order to make "the network smarter". In the nuclear industry, AI improves predictive maintenance (by means of vibration sensors and real-time alerts), anti-collision detection and the monitoring of sensitive sites.
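The predictive-maintenance logic mentioned above (vibration sensors feeding real-time alerts) can be illustrated with a minimal anomaly detector: each reading is compared with a rolling baseline of recent samples and flagged when it deviates strongly. This is a hedged sketch, not an industrial implementation; the window size and z-score threshold are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

def vibration_alerts(readings, window=20, z_threshold=3.0):
    """Flag readings that deviate strongly from the rolling baseline of
    the last `window` samples; return the indices of anomalous readings."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # Alert when the reading sits far outside the recent baseline.
            if sigma > 0 and abs(value - mu) > z_threshold * sigma:
                alerts.append(i)
        history.append(value)
    return alerts
```

In a real plant the same idea would run continuously on streaming sensor data, with thresholds calibrated per machine.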
Among the digital techniques implemented across industrial sectors, the digital twin is emerging as a major lever for operational optimization. A digital twin is an interconnected system fed by data from IoT systems, supervision platforms and simulation software. By building a virtual model of real objects, this technique offers companies increased visibility into their processes, better predictive maintenance and faster development of new products, without impacting production. However, it creates cybersecurity problems, as it exposes the "trade secrets" and "comparative advantages" of innovative companies. It exposes them to espionage, sabotage, manipulation of optimization parameters and/or destruction of critical data. The complexity of digital twins makes them difficult to secure, as they combine heterogeneous software from a variety of vendors, integrating different IoT sensors, AI layers, physical simulators, edge tools and, above all, cloud computing. In the current context of technological rivalry between advanced industrial states, such attacks constitute major threats to strategic resources. The digitization of industrial processes thus raises questions of national sovereignty that invite public and private decision-makers to extend the European directives on IT security and, in particular, to adapt the General Data Protection Regulation (GDPR) to the Industry 4.0 environment. In 2016, the Turgot club chronicled one of the first works devoted to the birth of "Industry 4.0": Kohler D., Weisz J.-D. (2016), Ambition industrie 4.0. The challenges of the digital transformation of the German industrial model, Eds Eyrolles. Since the 1990s, German industry has been engaged in "cobotics", or collaborative robotics, combining robotics, mechanics, electronics and cognitive sciences to assist machine operators. Since the 2000s, it has also initiated a process of "globotics", or globalization of resources through AI.
The latter makes it possible to shorten value-creation chains and decision-making circuits within organizations and their ecosystems, but it also accelerates the offshoring of jobs to laboratories, offshore factories and call centres. It also promotes the emergence of new forms of open organizational innovation based on free software, co-working and remote working, in principle more agile and less expensive, which extend from research and development (living labs, fablabs, etc.) to cooperative production (digital micro-manufacturing, do-it-yourself, maker spaces, etc.) and collaborative consumption (peer-to-peer accommodation, car sharing, etc.).
From symbolic AI to connectionist AI
Jean-Jacques Pluchart The history of AI is marked by a tension between two approaches, alternately symbolic and connectionist, as observed by Cardon, Cointet and Mazières (2018). Researchers following LeCun (2015) relaunched AI by processing massive data with so-called deep neural models (deep learning), following a logic borrowed from cybernetics. This approach, described as "generative", "inductive" or "connectionist", had long been marginalised after the launch of symbolic AI in 1956 at Dartmouth by John McCarthy and Marvin Minsky, followed by the development of expert systems and then the emergence of machine learning in the 1980s. Symbolic models were developed by a small number of leading researchers at MIT (Minsky, Papert), Carnegie Mellon (Simon, Newell) and Stanford (McCarthy), who mainly responded to public tenders and engaged in more or less playful experiments: chess or go games, simplified dynamic spaces, simulation of games, semantic networks, truth functions, robotisation of behaviours, creation of new languages, etc. While symbolic AI applies a model to data following hypothetico-deductive reasoning, connectionist AI follows an inductive logic, applying a learning method that makes predictions by iterating over massive data. While symbolic AI applies a model (a theory or a heuristic) to structured data in order to verify a result at a given horizon, connectionist AI produces original content by "learning the data" through appropriate questioning. While symbolic AI attempts to solve a predefined problem, connectionist AI induces meaningful representations from the interactions between social actors. This approach follows the logic of cybernetics initiated in 1948 by Norbert Wiener. The renaissance of connectionist AI is attributed in particular to the Parallel Distributed Processing research group led by Rumelhart et al. (1986).
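The deductive/inductive contrast drawn above can be made concrete with a toy example (illustrative code, not drawn from the cited works): a symbolic system states the rule for logical AND a priori, while a connectionist system induces the same behaviour from examples using Rosenblatt's perceptron rule.

```python
# Symbolic AI: the rule is written down a priori by the designer.
def symbolic_and(x1: int, x2: int) -> int:
    return 1 if (x1 == 1 and x2 == 1) else 0

# Connectionist AI: the same behaviour is induced from examples by a
# single artificial neuron trained with Rosenblatt's perceptron rule.
def train_perceptron(samples, epochs: int = 20, lr: float = 0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # supervised error signal
            w1 += lr * err * x1         # nudge weights toward the data
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

After training on the four AND examples, the learned neuron reproduces the hand-written rule, without the rule ever being stated explicitly.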
The work of the PDP group explores the deep mechanisms of knowledge by exploiting the metaphor of neurons (a network of connections) and assuming that knowledge is constructed by a binary activation mechanism. For more than 60 years, this controversy has given rise to countless scientific works: according to Cardon, Cointet and Mazières (2018), the "symbolic" corpus totalled 65,522 publications between 1956 and 2018, while the "connectionist" corpus gathered 106,278. This vast debate is part of a process of scientific construction and deconstruction theorised in particular by Latour (1988).
References
CARDON D., COINTET J.-P. et MAZIÈRES A. (2018), « La revanche des neurones. L'invention des machines inductives et la controverse de l'intelligence artificielle », Réseaux 2018/5 (n° 211).
LATOUR B. (1988), Science in Action: How to Follow Scientists and Engineers Through Society, Harvard University Press.
LECUN Y., BENGIO Y., HINTON G. (2015), « Deep learning », Nature, vol. 521, n° 7553.
RUMELHART D. E., McCLELLAND J. L. (1986), « PDP Models and General Issues in Cognitive Science », in PDP RESEARCH GROUP (1986), Parallel Distributed Processing. Explorations in the Microstructure of Cognition, Cambridge MA, MIT Press.
WIENER N. (2014), La cybernétique : Information et régulation dans le vivant et la machine, Seuil.
AI and intellectual property law
The law firm Debouzy organised a series of conferences on the challenges of generative AI, aimed at corporate lawyers and managers. On 18 November 2025, the Turgot club was invited to the conference led by lawyers Desrousseaux and Pérot, specialists in patent and copyright law. The two lawyers argue that the legal problems differ according to the phases of the process of value creation by AI: the collection of data by AI agents, the learning by software (from texts, images, voices or sounds, videos, code) and the exploitation of applications through user questions (prompts). The EU General Data Protection Regulation (GDPR), enacted in 2016, governs how the personal data of natural persons can be processed and transferred in Europe. It was reinforced in 2024 by certain provisions of the European AI Act. In the United States, data is protected in particular by the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA Patriot Act). In principle, data falls under an opt-out regime rather than an opt-in regime (under which protected data could not be used by default). Data can thus in principle be used in the learning phase of an application; in the operating phase, however, it cannot be reproduced in full in response to user prompts, only partially transformed or reproduced. Digital marking of creations and inventions makes it possible to better identify the data processed. At the stage of exploiting the results of an application, it is difficult to measure their degree of creation or invention. The two lawyers note an increase in infringement litigation, as well as in negotiations and settlements over the sharing of the value created by AI from patented inventions or protected intellectual creations. The trend is towards negotiation rather than litigation, because infringement is difficult to prove.
Overall, the lawyers believe that current patent, copyright and trade-secrecy regulations are sufficient to protect inventors and creators. The growing number of rulings and use cases should be enough to establish jurisprudence and stabilise practices. Jean-Jacques Pluchart
The Turgot club invited by Cerfrance
On 25 November 2025, the Turgot club, represented by J.-J. Pluchart, was invited by the Cerfrance network to lead a conference-debate on the current challenges of AI for productive companies and professional associations. The Cerfrance organisation brings together 700 agencies, 14,000 experts and 320,000 clients "committed to economic, human and sustainable performance". After retracing the development cycles of AI, successively "weak" then "strong", the speaker analysed the functionalities and occupations generated by AI. The AI "technology stack" now comprises three layers: systemic AI (semiconductors, networks, data centres), functional AI (recognition, data management, processing, storage, security), and agentic AI (text: translation, article writing, summaries, scripts, automatic responses; image: artistic creation, graphic design, illustration; video and audio: generation of synthetic voices, music, animations; code: programming assistance, generation of scripts or functions). More and more sectors of activity must adapt their work organisation, in particular health, defence, banking and insurance, cybersecurity, industrial production (cobotics), mobility, publishing, cinema and media, education, research and programming. Strategic functions, crisis management and local services are a priori spared. The adaptation proposals formulated in the reports of Villani (2018), Draghi and Letta (2024), Aghion-Bouverot (2024) and the AI Summit of February 2025 (AI as a "power lever") were presented.
The issues raised by the development of AI were then discussed: AI and national sovereignty, the contribution of AI to productivity and economic growth, the impact of data centres on GHG emissions, the financing of investment in AI R&D, the profitability of general AI models, the AI "stock market bubble", the destruction and transformation of jobs, the dominant positions and cooperation agreements of the GAFAM, the ethical codes of AI, etc. It appears that the legal problems differ according to the phases of the process of value creation by AI: the collection of data by AI agents, learning by reinforcement models, and the exploitation of applications through user questions (prompts). The conference also addressed the general and specific biases of AI models: technical and psychological biases (perceptual, emotional and cognitive), biases specific to generative AI (related to linguistic disparities and their implicit character), and deliberate biases (simulations, manipulations, falsifications, intrusions)*. The presentation concluded with a reflection on the strategies to be implemented to turn AI into a competitive advantage, and in particular on new approaches to "augmented management": the integration of management systems, the audit of "high AI quotient" functions, new decision-making aids, benchmarking, nudging, assistance with quotes (Relief) and troubleshooting, quality control, cybersecurity, and training in AI and ethics. The conference was illustrated by use cases and references to the works and reports chronicled on clubturgot.com and analysed in the latest book of the Turgot club: New Reflections on the Wealth of Nations. The Lessons of Turgot and Smith. Jean-Jacques Pluchart *See the review on AI and intellectual property.
The Legacy of Turgot: Turgot’s Lessons
When one rereads the economic history of France, it seems that in every era a voice has reminded us of the same simple truths: one must not spend what one does not have, one must not play with money, and one cannot build prosperity on promises or wagers. Turgot, Jacques Rueff, and Jean-Marc Daniel embodied this fidelity to reality. Each, in his own century, defended the idea that balancing public finances is not a constraint but a condition of freedom. Their legacy has endured through time, tracing a red thread from the eighteenth century to the present day.
Turgot: Rigor as a Starting Point
When Turgot was appointed Controller-General of Finances, he found a state in fiscal disorder. He rejected privileges and denounced waste. His guiding principle can be found in his Reflections on the Formation and Distribution of Wealth (1766), where he wrote: "We do not create wealth by distributing what we do not have." This sentence, often quoted, expresses a philosophy more than an economic theory: rigor is not an accounting obsession, it is a moral requirement. For him, a deficit is an injustice passed on to future generations. In a France bogged down in rents and privileges, he defended the freedom to work, the free circulation of grain, and the abolition of forced labor.
Jacques Rueff: The Guardian of Monetary Truth
A century and a half later, Jacques Rueff took up the torch. He too lived in a world of illusions, those of a misinterpreted Keynesianism and of deficit financing through monetary creation. Alongside General de Gaulle, he helped lead the 1958 fiscal recovery of the French economy. For him, public debt was not just a number but a political fault. In The Social Order, he wrote: "No order can be built by defying the natural laws of the economy." Budgetary balance, in his eyes, was an instrument of sovereignty: every deficit, every indulgence, led to dependency. He saw money as a moral instrument before a financial one.
In The Relentless Problem of Balance of Payments, he extended Turgot's spirit: without fiscal discipline, there can be no lasting freedom. Rueff rejected fatalism. In Unemployment and Money, he demonstrated that unemployment results from accumulated rigidities. He advocated greater labor-market flexibility and the defense of free competition.
Jean-Marc Daniel: Growth Through Freedom and Responsibility
Jean-Marc Daniel stands in this same lineage. A liberal economist, he rebukes Keynesian complacency. His work belongs to a globalized, open economy where the temptation of protectionism and public spending remains strong. His intellectual mission is to remind us that sustainable growth rests on four essential pillars: work, saving, freedom (competition), and education. For him, rigor goes hand in hand with pedagogy. The State cannot produce wealth; it can only guarantee the conditions for it: security, justice, education, and monetary stability. Economics, in his eyes, is not a machine but a moral order founded on the truth of prices established by free competition and the reward of effort. In this sense, Daniel is a continuator of Turgot and Rueff: he denounces public budgetary excesses and insists that prosperity cannot be decreed; it must be learned.
Philippe Aghion: Creative Continuity
Philippe Aghion extends this heritage into another dimension, that of innovation. Where Turgot saw rigor as the condition of freedom, and Rueff viewed sound money as the keystone of prosperity, Aghion introduces the discipline of creativity. Inspired by Schumpeter, he formalized creative destruction: progress does not come from a spendthrift state but from a stable framework in which firms are free to innovate, fail, and begin again. Like his predecessors, Aghion does not oppose innovation and rigor; he connects them. Without strong institutions, quality education, and incentives for effort and investment, there can be no lasting progress.
In this sense, he continues the spirit of Turgot and Rueff: freeing human energy while maintaining the discipline of rules. He also joins Daniel in emphasizing the importance of knowledge and education.

Conclusion

From Turgot to Rueff, from Daniel to Aghion: four voices, four centuries, one lesson. Discipline, whether fiscal, monetary, or intellectual, is not optional; it is a political necessity. Economic disorder always prepares social disorder and, ultimately, leads to servitude, while rigor opens the path to freedom. Turgot's thought has endured because it is rooted in reality and rejects illusions; its legacy lives on because success lies not in mortgaging the future but in giving it every chance to exist free of servitude. To reread Turgot, Rueff, Daniel, and Aghion is not to yield to nostalgia; it is to rediscover, beneath today's debates, the keys to lasting prosperity.

Benoit Frayer, November 2025

References
Anne-Robert Jacques Turgot, Reflections on the Formation and Distribution of Wealth (1766).
Jacques Rueff, The Social Order (1945); The Relentless Problem of Balance of Payments (1965); Unemployment and Money (1931).
Jean-Marc Daniel, Capitalism and Its Enemies (2016); The Collusive State (2014); A Living History of Economic Thought (2018).
Philippe Aghion, The Power of Creative Destruction (with Céline Antonin and Simon Bunel, 2020); Endogenous Growth Theory (with Peter Howitt, 1998).
Joseph Schumpeter, Capitalism, Socialism and Democracy (1942), the theoretical foundation of "creative destruction," later extended by Aghion.
Digital economy and violence in the workplace
Jean-Jacques Pluchart

The digital economy, and in particular artificial intelligence, is often presented as freeing workers from the most alienating tasks in favor of more creative ones, but it is also perceived as a potential source of loss of meaning, professional malaise, and violence at work. Within an organization, violence can take many forms, verbal and physical, psychological and social, symbolic and structural, which differ according to multiple factors: the activity carried out, the work situation, gender, but also the systems deployed, as in the case of digital technologies covering automation and expert systems, the Internet and social networks, and symbolic and generative AI applications. Violence can be exercised between the actors themselves (between colleagues, or between superiors and subordinates) and/or between them and the stakeholders (customers, users, suppliers, etc.) of the company or administration, but it can also be caused by a procedure or a system.

The most frequently cited form of AI-generated violence against workers is the fear of losing one's job and thus being socially downgraded, or of having to adapt to a new job said to be "augmented" by AI. The future "robot-man" fears, in particular, being confronted with the ingratitude and loneliness of a job carried out remotely, alone in front of a screen, prey to the dysfunctions and "black boxes" of a system, and most often subjected to digital panopticism. He fears losing the meaning of his work, no longer recognizing his symbolic order, no longer knowing his professional identity. He fears being exposed to the stress and burnout of the "enslaved man." This anxiety is all the more depressive in that he can no longer activate his defense mechanisms (denial, displacement, derision, sublimation, etc.) against a "robot" whose grip seems inevitable.
The violence of this new relationship to work is all the more implicit in that it is marked by uncertainty about the date and conditions of implementation of the new system, which is thus perceived as a "black swan." The threat is all the more latent in that it covers a growing number of jobs, ranging from the back office (administration) to the middle office (production and control) and the front office (customer relations, etc.). It now reaches the managers and executives responsible for reorganizing a company or a service, arbitrating between often complex operating systems, ensuring their cybersecurity, and training staff in new practices. They are thus exposed to new types of risks, both to the sustainability of their organizations and to the future of their own careers. These admittedly incomplete observations show that the forms of violence at work generated by the accelerated development of AI can only be detected, analyzed, and addressed by HRM approaches drawing not only on psychology and sociology but also on anthropology and psychoanalysis.