HSE University in Perm hosted a seminar on the RSF project "Comparative Analysis of AI and Real Individuals in Economic Decisions."

On June 24, 2025, HSE University in Perm launched a summer series of workshops under RSF Grant 25-18-00539, "Comparative Analysis of Behavior of Artificial Intelligence-Based Agents and Real Individuals in Economic Decision-Making", led by Petr Parshakov.

The project addresses one of the most pressing scientific questions of recent years: can agents based on large language models (LLMs) simulate human behavior in economic scenarios? How do their decisions change depending on the context? And to what extent are their decision-making processes "human-like" at all?

There are still relatively few studies in this field, and most of them are preprints: scientific texts that have not yet undergone peer review. That is why we decided to hold a series of seminars analyzing the most relevant international papers in live discussion, to understand where the global research agenda is heading.

The first workshop presented three studies:

  • Dmitry Dagaev discussed the preprint "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?" (J. J. Horton, 2023), which asks whether LLMs can be treated as economic agents, albeit artificial ones with a decision-making logic of their own.
  • Evgeniya Shenkman reviewed the article "Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games" (Ma, 2024), which analyzes the behavior of LLM agents in the dictator game, a classic behavioral-economics experiment in which one player unilaterally splits an endowment with a passive recipient (a minimal sketch of this setup follows the list). The article raises the complex and still-open question of how valid the parallels between AI and human behavior are in such conditions.
  • Petr Parshakov reviewed "Can Large Language Model Agents Simulate Human Trust Behavior?" (Xie et al., 2025). The study builds its agents on the BDI (belief-desire-intention) architecture and investigates whether models can reproduce trust-related behavior; the underlying trust game is sketched after the summary below. Initial results show that LLM agents can partially simulate human trust behavior in certain respects.

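To make the dictator-game setup concrete, here is a minimal sketch of how an LLM could be cast as the dictator. The `query_llm` function is a hypothetical stand-in for any chat-model API, and the endowment, prompt wording, and parsing logic are illustrative assumptions rather than the protocol used in Ma (2024).

```python
import re

ENDOWMENT = 100  # illustrative endowment; studies vary in stakes and framing

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this to any chat-model API."""
    raise NotImplementedError

def play_dictator_game() -> int:
    """Ask the model, acting as the dictator, how much to transfer."""
    prompt = (
        f"You have been given {ENDOWMENT} points. You may transfer any "
        f"amount from 0 to {ENDOWMENT} to an anonymous second player, "
        "who has no say in the outcome. How many points do you "
        "transfer? Answer with a single integer."
    )
    reply = query_llm(prompt)
    match = re.search(r"\d+", reply)  # take the first integer in the reply
    if match is None:
        raise ValueError(f"Could not parse a transfer from: {reply!r}")
    return min(max(int(match.group()), 0), ENDOWMENT)  # clamp to valid range
```

Repeating such calls many times and comparing the resulting distribution of transfers with human baselines is, in essence, the behavioral evaluation that Ma (2024) carries out under far more careful controls.
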
Each of the presented studies contributed to a shared understanding of this new scientific field. The first paper established a theoretical framework and posed the key question of whether LLMs can be considered economic agents. The second highlighted the methodological challenges of comparing AI and human behavior and demonstrated how ambiguous the results of such comparisons can be. The third addressed the important topic of trust modeling and showed how a promising cognitive architecture, BDI, can be applied in economic scenarios.
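
The trust game from the third paper has a simple payoff structure that is worth stating explicitly: the investor sends part of an endowment, the amount is multiplied in transit, and the trustee decides how much to return. The sketch below uses the conventional triple multiplier as an assumption, and it encodes only the payoffs, not the BDI agents from Xie et al. (2025).

```python
MULTIPLIER = 3  # conventional trust-game multiplier (an assumption here)

def trust_game_payoffs(endowment: float, sent: float,
                       returned: float) -> tuple[float, float]:
    """Payoffs in a one-shot trust game.

    The investor sends `sent` out of `endowment`; the trustee receives
    MULTIPLIER * sent and sends `returned` back to the investor.
    """
    if not 0 <= sent <= endowment:
        raise ValueError("sent must be between 0 and the endowment")
    received = MULTIPLIER * sent
    if not 0 <= returned <= received:
        raise ValueError("returned must be between 0 and the amount received")
    return endowment - sent + returned, received - returned

# Example: sending half of a 10-point endowment; a third of it comes back.
print(trust_game_payoffs(10, 5, 5))  # -> (10, 10)
```

In a BDI-style setup, an LLM agent would be prompted to articulate its beliefs about the counterpart and its desires before committing to an intention, that is, the value of `sent` or `returned`; how faithfully that process mirrors human trust is what the paper investigates.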

These workshops will continue through summer 2025 and are open to everyone. To take part, please email Egor Nazarovskiy at EBNazarovskii@hse.ru.