Research

Research interests: applied econometrics, machine learning, environmental economics.

  1. Fernandes Machado, A., Charpentier, A., Flachaire, E., Gallic, E., Hu, F. Probabilistic Scores of Classifiers, Calibration is not Enough ⟨arXiv:2408.03421⟩
    Abstract and links

    When it comes to quantifying the risks associated with decisions made using a classifier, it is essential that the scores returned by the classifier accurately reflect the underlying probability of the event in question. The model must then be well-calibrated. This is particularly relevant in contexts such as assessing the risks of payment defaults or accidents, for example. Tree-based machine learning techniques like random forests and XGBoost are increasingly popular for risk estimation in the industry, though these models are not inherently well-calibrated. Adjusting hyperparameters to optimize calibration metrics, such as the Integrated Calibration Index (ICI), does not ensure score distribution aligns with actual probabilities. Through a series of simulations where we know the underlying probability, we demonstrate that selecting a model by optimizing Kullback-Leibler (KL) divergence should be a preferred approach. The performance loss incurred by using this model remains limited compared to that of the best model chosen by AUC. Furthermore, the model selected by optimizing KL divergence does not necessarily correspond to the one that minimizes the ICI, confirming the idea that calibration is not a sufficient goal. In a real-world context where the distribution of underlying probabilities is no longer directly observable, we adopt an approach where a Beta distribution a priori is estimated by maximum likelihood over 10 UCI datasets. We show, similarly to the simulated data case, that optimizing the hyperparameters of models such as random forests or XGBoost based on KL divergence rather than on AUC allows for a better alignment between the distributions without significant performance loss. Conversely, minimizing the ICI leads to substantial performance loss and suboptimal KL values.

    bib entry
    @misc{fernandesmachado_2024_probabilistic,
          title={Probabilistic Scores of Classifiers, Calibration is not Enough}, 
          author={Fernandes Machado, Agathe and Charpentier, Arthur and Flachaire, Emmanuel and Gallic, Ewen and Hu, Fran\c{c}ois},
          year={2024},
          eprint={2408.03421},
          archivePrefix={arXiv},
          primaryClass={cs.LG},
          url={https://arxiv.org/abs/2408.03421}, 
          doi={10.48550/arXiv.2408.03421}
    }
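
    Code sketch

    A minimal sketch (in Python, not from the paper) of the selection criterion discussed in the abstract above: on simulated data where the true probabilities are known, candidate random forests are compared both by AUC and by the Kullback-Leibler divergence between the distribution of their scores and that of the true probabilities. The data-generating process, the binning scheme, and the hyperparameter grid are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 20_000
    x = rng.normal(size=(n, 4))
    true_p = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))   # known underlying probabilities
    y = rng.binomial(1, true_p)
    x_tr, x_te, y_tr, y_te, _, p_te = train_test_split(x, y, true_p, random_state=0)

    def kl_divergence(scores, probs, bins=20, eps=1e-9):
        # KL divergence between the empirical distributions of scores and of true
        # probabilities, computed on a common histogram over [0, 1]
        edges = np.linspace(0, 1, bins + 1)
        p = np.histogram(scores, bins=edges)[0] / len(scores) + eps
        q = np.histogram(probs, bins=edges)[0] / len(probs) + eps
        return float(np.sum(p * np.log(p / q)))

    for max_depth in (2, 4, 8, None):                       # stand-in for hyperparameter tuning
        scores = (RandomForestClassifier(max_depth=max_depth, random_state=0)
                  .fit(x_tr, y_tr).predict_proba(x_te)[:, 1])
        print(f"max_depth={max_depth}: AUC={roc_auc_score(y_te, scores):.3f}, "
              f"KL={kl_divergence(scores, p_te):.3f}")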
  2. Dovis, M., Ferrière, N., Gallic, E. Weather-induced Domestic Violence
  1. Crofils, C., Gallic, E. & Vermandel, G. (2025). The Dynamic Effects of Weather Shocks on Agricultural Production. Journal of Environmental Economics and Management, 130, 103078. doi: 10.1016/j.jeem.2024.103078
    Abstract and links

    This paper proposes a new methodological approach using high-frequency data and local projections to assess the impact of weather on agricultural production. Local projections capture both immediate and delayed effects across crop types and growth stages, while providing early warnings for food shortages. Adverse weather shocks, such as excess heat or rain, consistently lead to delayed downturns in production, with heterogeneous effects across time, crops, and seasons. We build a new index of aggregate weather shocks that accounts for the typical delay between event occurrence and economic recognition, finding that these shocks are recessionary at the macroeconomic level, reducing inflation, production, exports and exchange rates.

    bib entry
    @article{Crofils_2025_jeem,
    title = {The dynamic effects of weather shocks on agricultural production},
    journal = {Journal of Environmental Economics and Management},
    volume = {130},
    pages = {103078},
    year = {2025},
    issn = {0095-0696},
    doi = {10.1016/j.jeem.2024.103078},
    url = {https://www.sciencedirect.com/science/article/pii/S0095069624001529},
    author = {Crofils, C\'edric and Gallic, Ewen and Vermandel, Gauthier},
    keywords = {Weather shocks, Agriculture, Local projections, VAR},
    }
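
    Code sketch

    A compact local-projection sketch (Python, illustrative only) of the type of regression described in the abstract above: for each horizon h, production at t+h is regressed on a weather-shock measure at t, together with a lagged control, and the collected shock coefficients trace the dynamic response. The simulated series, the lag choice, and the HAC settings are assumptions, not the paper's specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    T = 300
    weather_shock = rng.normal(size=T)              # e.g. an excess heat or rainfall index
    production = np.zeros(T)
    for t in range(1, T):
        production[t] = 0.6 * production[t - 1] - 0.3 * weather_shock[t - 1] + rng.normal(scale=0.5)

    df = pd.DataFrame({"y": production, "shock": weather_shock})
    horizons = range(13)
    irf = []
    for h in horizons:
        dep = df["y"].shift(-h)                     # production h periods ahead
        X = sm.add_constant(pd.DataFrame({"shock": df["shock"], "y_lag": df["y"].shift(1)}))
        res = sm.OLS(dep, X, missing="drop").fit(cov_type="HAC", cov_kwds={"maxlags": h + 1})
        irf.append(res.params["shock"])             # response of production at horizon h

    print(pd.Series(irf, index=list(horizons), name="response to a weather shock"))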
  2. Ba, A. S., Gallic, E., Michel, P. & Paraponaris, A. (2024). The Imaginary Healthy Patient. Revue d’économie politique, 134(6), forthcoming.
    Abstract and links

    Anxiety and depression may have serious disabling consequences for health, social, and occupational outcomes for people who are unaware of their actual health status and/or whose mental health symptoms remain undiagnosed by physicians. This article provides a big picture of unrecognised anxiety and depressive troubles revealed by a low score on the Mental Health Inventory-5 (MHI-5) with the help of machine learning methods using the 2012 French National Representative Health and Social Protection Survey (Enquête Santé et Protection Sociale, ESPS) matched with yearly healthcare consumption data from the French Sickness Fund. Compared to people with no latent symptoms who did not declare any depression over the last 12 months, those with unrecognised anxiety or depression were found to be older, more deprived, more socially disengaged, at a higher probability of adverse working conditions, and with higher healthcare expenditures backed, to some extent, by chronic conditions other than anxiety or mood disorder.

    bib entry
    @article{Ba_2024_rep,
    title = {The Imaginary Healthy Patient},
    journal = {Revue d'économie politique},
    volume = {134},
    number = {6},
    pages = {},
    year = {2024},
    author = {Ba, Amady Seydou and Gallic, Ewen and Michel, Pierre and Paraponaris, Alain},
    keywords = {unrecognised mental disorders; mental health inventory-5 (MHI-5); healthcare consumption; workplace outcomes; tree-based methods; SHAP values},
    }
  3. Dufrénot, G., Gallic, E., Michel, P., Bonou, N. M., Gnaba, S. & Slaoui, I. (2024). Impact of socioeconomic determinants on the speed of epidemic diseases: a comparative analysis. Oxford Economic Papers, 76(4), 1089–1107. doi: 10.1093/oep/gpae003
    Abstract and links

    We study the impact of socioeconomic factors on two key parameters of epidemic dynamics. Specifically, we investigate a parameter capturing the rate of deceleration at the very start of an epidemic, and a parameter that reflects the pre-peak and post-peak dynamics at the turning point of an epidemic like COVID-19. We find two important results. First, the policies to fight COVID-19 (such as social distancing and containment) have been effective in reducing the overall number of new infections, because they influence not only the epidemic peaks, but also the speed at which the disease spreads in its early stages. Second, healthcare infrastructure is just as effective as anti-COVID policies, not only in preventing an epidemic from spreading too quickly at the outset, but also in creating the desired dynamic around peaks: slow spreading, then rapid disappearance.

    bib entry
    @article{10.1093/oep/gpae003,
        author = {Dufr{\'e}not, Gilles and Gallic, Ewen and Michel, Pierre and Bonou, Norgile Midopk{\`e} and Gnaba, S{\'e}gui and Slaoui, Iness},
        title = "{Impact of socioeconomic determinants on the speed of epidemic diseases: a comparative analysis}",
        journal = {Oxford Economic Papers},
        volume = {76},
        number = {4},
        pages = {1089--1107},
        year = {2024},
        month = {10},
        issn = {0030-7653},
        doi = {10.1093/oep/gpae003},
        url = {https://doi.org/10.1093/oep/gpae003},
        eprint = {https://academic.oup.com/oep/advance-article-pdf/doi/10.1093/oep/gpae003/56707735/gpae003.pdf},
    }
  4. Gallic, E., Lubrano, M. & Michel, P. (2021). Optimal lockdowns for COVID‐19 pandemics: Analyzing the efficiency of sanitary policies in Europe. Journal of Public Economic Theory, 24(5), 944–967. doi: 10.1111/jpet.12556
    Abstract and links

    Two main nonpharmaceutical policy strategies have been used in Europe in response to the COVID-19 epidemic: one aimed at natural herd immunity and the other at avoiding saturation of hospital capacity by crushing the curve. The two strategies lead to different results in terms of the number of lives saved on the one hand and production loss on the other hand. Using a susceptible–infected–recovered–dead model, we investigate and compare these two strategies. As the results are sensitive to the initial reproduction number, we estimate the latter for 10 European countries for each wave from January 2020 till March 2021 using a double sigmoid statistical model and the Oxford COVID-19 Government Response Tracker data set. Our results show that Denmark, which opted for crushing the curve, managed to minimize both economic and human losses. Natural herd immunity, sought by Sweden and the Netherlands, does not appear to have been a particularly effective strategy, especially for Sweden, both in economic terms and in terms of lives saved. The results are more mixed for other countries, but with no evident trade-off between deaths and production losses.

    bib entry
    @article{Gallic_2021_jpet,
        doi = {10.1111/jpet.12556},
        url = {https://doi.org/10.1111%2Fjpet.12556},
        year = 2021,
        month = {nov},
        publisher = {Wiley},
        volume = {24},
        number = {5},
        pages = {944--967},
        author = {Gallic, Ewen and Lubrano, Michel and Michel, Pierre},
        title = {Optimal lockdowns for {COVID}-19 pandemics: Analyzing the efficiency of sanitary policies in Europe},
        journal = {Journal of Public Economic Theory}
    }
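
    Code sketch

    A minimal discrete-time susceptible-infected-recovered-dead (SIRD) simulation (Python) to make the abstract's modelling device concrete: a stricter policy is represented by a lower transmission rate, and the two scenarios are compared on peak infections and cumulative deaths. Parameter values are placeholders, not the estimates of the paper.

    def sird(beta, gamma=0.1, mu=0.005, days=540, i0=1e-4):
        # beta: transmission rate, gamma: recovery rate, mu: death rate (population shares)
        s, i, r, d = 1.0 - i0, i0, 0.0, 0.0
        peak = i
        for _ in range(days):
            new_inf = beta * s * i
            new_rec, new_dead = gamma * i, mu * i
            s -= new_inf
            i += new_inf - new_rec - new_dead
            r += new_rec
            d += new_dead
            peak = max(peak, i)
        return peak, d

    # "Crushing the curve" versus a looser policy: same disease, different effective transmission
    for label, beta in [("loose restrictions", 0.35), ("strict lockdown", 0.15)]:
        peak, deaths = sird(beta)
        print(f"{label}: peak infected share = {peak:.3f}, death share = {deaths:.4f}")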
  5. Charpentier, A. & Gallic, E. (2020). La démographie historique peut-elle tirer profit des données collaboratives des sites de généalogie ? Population, 75(2), 391-421. doi: 10.3917/popu.2002.0391
    Abstract and links

    Demographers’ interest in genealogical data is not new. The development of websites dedicated to building family trees and sharing them rightly revives this interest. The authors of this article assess the quality of measures of demographic dynamics (fertility, mortality, migration) derived from the data of one of these sites. The biases are numerous, and the general tendency is an underestimation of the phenomena. It is probably in the study of internal migration that the contribution of these data to knowledge is most promising.

    bib entry
    @article{Charpentier_2020_population,
        doi = {10.3917/popu.2002.0391},
        url = {https://doi.org/10.3917%2Fpopu.2002.0391},
        year = 2020,
        month = {nov},
        publisher = {{CAIRN}},
        volume = {75},
        number = {2},
        pages = {391--421},
        author = {Charpentier, Arthur and Gallic, Ewen},
        title = {La d{\'{e}}mographie historique peut-elle tirer profit des donn{\'{e}}es collaboratives des sites de g{\'{e}}n{\'{e}}alogie~?},
        journal = {Population}
    }
  6. Gallic, E. & Vermandel, G. (2020). Weather Shocks. European Economic Review, 124, 103409. doi: 10.1016/j.euroecorev.2020.103409
    Abstract and links

    How much do weather shocks matter? The literature addresses this question in two isolated ways: either by looking at long-term effects through the prism of calibrated theoretical models, or by focusing on both short and long terms through the lens of empirical models. We propose a framework that reconciles these two approaches by taking the theory to the data in two complementary ways. We first document the propagation mechanism of a weather shock using a Vector Auto-Regressive model on New Zealand data. To explain the mechanism, we build and estimate a general equilibrium model with a weather-dependent agricultural sector to investigate the weather’s business cycle implications. We find that weather shocks: (i) explain about 35% of GDP and agricultural output fluctuations in New Zealand; (ii) entail a welfare cost of 0.30% of permanent consumption; (iii) critically increase macroeconomic volatility under climate change, resulting in a higher welfare cost peaking at 0.46% in the worst-case scenario of climate change.

    bib entry
    @article{Gallic_2020_eer,
        doi = {10.1016/j.euroecorev.2020.103409},
        url = {https://doi.org/10.1016%2Fj.euroecorev.2020.103409},
        year = 2020,
        month = {may},
        publisher = {Elsevier {BV}},
        volume = {124},
        pages = {103409},
        author = {Gallic, Ewen and Vermandel, Gauthier},
        title = {Weather shocks},
        journal = {European Economic Review}
    }
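
    Code sketch

    A small sketch (Python, simulated data) of the first, empirical step mentioned in the abstract above: estimate a vector autoregression that includes a weather variable and trace the orthogonalised impulse responses to a weather shock. The series and the ordering (weather first) are illustrative assumptions, not the paper's New Zealand dataset.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(7)
    T = 400
    weather = rng.normal(size=T)                    # weather disturbances
    agri, gdp = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        agri[t] = 0.5 * agri[t - 1] - 0.4 * weather[t - 1] + rng.normal(scale=0.3)
        gdp[t] = 0.7 * gdp[t - 1] + 0.2 * agri[t - 1] + rng.normal(scale=0.2)

    data = pd.DataFrame({"weather": weather, "agri": agri, "gdp": gdp})
    res = VAR(data).fit(maxlags=4, ic="aic")
    irf = res.irf(12)
    # Response of GDP to a one-standard-deviation weather shock, horizons 0 to 12
    print(irf.orth_irfs[:, data.columns.get_loc("gdp"), data.columns.get_loc("weather")])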
  7. Charpentier, A. & Gallic, E. (2019). Using collaborative genealogy data to study migration: a research note. The History of the Family, 25(1), 1-21. doi: 10.1080/1081602X.2019.1641130
    Abstract and links

    The digital age allows data collection to be done on a large scale and at low cost. This is the case of genealogy trees, which flourish on numerous digital platforms thanks to the collaboration of a mass of individuals wishing to trace their origins and share them with other users. The family trees constituted in this way contain information on the links between individuals and their ancestors, which can be used in historical demography, and more particularly to study migration phenomena. The case of 19th century France is taken as an example, using data from the family trees of 238,009 users of the Geneanet website, or 2.5 million (unique) individuals. Using the geographical coordinates of the birthplaces of 25,485 ancestors born in France between 1800 and 1804 and those of their descendants (24,516 children, 29,715 grandchildren and 62,165 great-grandchildren), we study migration between generations at several geographical scales. We start with a broad scale, that of the departments, to reach a much finer one, that of the cities. Our results are consistent with those of the literature traditionally based on the parish or civil status registers. The results show that the use of collaborative genealogy data not only makes it possible to support previous findings of the literature, but also to enrich them.

    bib entry
    @article{Charpentier_2019_historyfamily,
        doi = {10.1080/1081602x.2019.1641130},
        url = {https://doi.org/10.1080%2F1081602x.2019.1641130},
        year = 2019,
        month = {jul},
        publisher = {Informa {UK} Limited},
        volume = {25},
        number = {1},
        pages = {1--21},
        author = {Charpentier, Arthur and Gallic, Ewen},
        title = {Using collaborative genealogy data to study migration: a research note},
        journal = {The History of the Family}
    }
  8. Gallic, E., Poutineau, J.-C. & Vermandel, G. (2017). L’impact de la crise financière sur la performance de la politique monétaire conventionnelle de la zone euro. Revue économique, 68(HS1), 63-86. doi: 10.3917/reco.hs02.0063
    Abstract and links

    This article assesses the extent to which the 2007 financial crisis affected the conduct of conventional monetary policy in the euro area. The question is addressed within a theoretical framework based on the New Keynesian synthesis model that prevailed before the 2007 crisis. We find that the crisis sharply reduced the performance of conventional policy, following the deterioration of the trade-off between the variance of inflation and that of activity (as defined by the Taylor curve) and the decline in its efficiency (as measured by the distance to the Taylor curve, driven by a sharp increase in the contribution of the output gap). The interest rates simulated by our model show that the ECB should have set rates below those observed, and even negative at the end of the period. New unconventional instruments therefore prove necessary to supplement a monetary policy practice that had focused primarily on price stability in a calm macroeconomic environment.

    bib entry
    @article{Gallic_2017_revueeco,
        doi = {10.3917/reco.hs02.0063},
        url = {https://doi.org/10.3917%2Freco.hs02.0063},
        year = 2017,
        month = {sep},
        publisher = {{CAIRN}},
        volume = {68},
        number = {{HS}1},
        pages = {63--86},
        author = {Gallic, Ewen and Poutineau, Jean-Christophe and Vermandel, Gauthier},
        title = {L'impact de la crise financi{\`{e}}re sur~la~performance de la politique mon{\'{e}}taire conventionnelle de la zone euro},
        journal = {Revue {\'{e}}conomique}
    }
  9. Charpentier, A. & Gallic, E. (2015). Kernel density estimation based on Ripley’s correction. GeoInformatica, 20, 95–116. doi: 10.1007/s10707-015-0232-z
    Abstract and links

    In this paper, we investigate a technique inspired by Ripley’s circumference method to correct the bias of density estimation at the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to efficiently compute the weights that correct border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of this technique to visualize hot spots of car accidents and campsite locations, as well as locations of bike thefts.

    bib entry
    @article{Charpentier_2015_geoinformatica,
        doi = {10.1007/s10707-015-0232-z},
        url = {https://doi.org/10.1007%2Fs10707-015-0232-z},
        year = 2015,
        month = {aug},
        publisher = {Springer Science and Business Media {LLC}},
        volume = {20},
        number = {1},
        pages = {95--116},
        author = {Charpentier, Arthur and Gallic, Ewen},
        title = {Kernel density estimation based on Ripley's correction},
        journal = {{GeoInformatica}}
    }
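
    Code sketch

    A one-dimensional illustration (Python) of the border-correction principle described in the abstract above: near the boundary of the study region, part of each Gaussian kernel's mass falls outside, biasing the naive estimate downward, and dividing by the in-region kernel mass corrects this. The paper works in two dimensions with an automatic radius selection; this sketch only shows the weighting idea on the unit interval.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 1, size=5_000)       # sample from a flat density on [0, 1]
    h = 0.05                                # bandwidth
    grid = np.linspace(0, 1, 101)

    # Standard Gaussian kernel density estimate on the grid
    naive = norm.pdf((grid[:, None] - x[None, :]) / h).mean(axis=1) / h

    # Correction weight: share of each kernel's mass lying inside the region [0, 1]
    inside_mass = norm.cdf((1 - grid) / h) - norm.cdf((0 - grid) / h)
    corrected = naive / inside_mass

    print("density at the edge, naive    :", round(float(naive[0]), 2))       # about 0.5
    print("density at the edge, corrected:", round(float(corrected[0]), 2))   # about 1.0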
  1. Fernandes Machado, A., Charpentier, A., Gallic, E. (2025). Sequential Conditional (Marginally Optimal) Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness. AAAI Conference on Artificial Intelligence. ⟨arXiv:2408.03425⟩ (new version soon)
    Abstract and links

    In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020), and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss fairness at the individual level. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.

    bib entry
    @misc{fernandesmachado_2024_sequential,
          title={Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness}, 
          author={Fernandes Machado, Agathe and Charpentier, Arthur and Gallic, Ewen},
          year={2024},
          eprint={2408.03425},
          archivePrefix={arXiv},
          primaryClass={cs.LG},
          url={https://arxiv.org/abs/2408.03425}, 
          doi={10.48550/arXiv.2408.03425}
    }
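
    Code sketch

    A rough two-variable sketch (Python) of the sequential, triangular transport idea described above: following a causal ordering x1 to x2, x1 is transported between groups by univariate quantile mapping, then x2 is transported conditionally on x1, approximated here by quantile-mapping the residuals of group-wise linear regressions, which is only exact in a Gaussian-linear setting. This illustrates the principle, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000
    # Group 0 and group 1 differ in the distribution of x1 and in x2 given x1
    x1_0 = rng.normal(0.0, 1.0, n); x2_0 = 1.0 + 0.5 * x1_0 + rng.normal(0, 1.0, n)
    x1_1 = rng.normal(1.0, 1.5, n); x2_1 = 0.5 + 1.0 * x1_1 + rng.normal(0, 0.8, n)

    def quantile_map(values, source, target):
        # One-dimensional optimal transport: map values through their quantile
        # level in `source` to the same level in `target`
        ranks = np.searchsorted(np.sort(source), values) / len(source)
        return np.quantile(target, np.clip(ranks, 0, 1))

    # Step 1: counterfactual x1 of group-0 individuals "had they been in group 1"
    x1_star = quantile_map(x1_0, x1_0, x1_1)

    # Step 2: transport x2 conditionally on x1, via residuals of group-wise fits
    b0, b1 = np.polyfit(x1_0, x2_0, 1), np.polyfit(x1_1, x2_1, 1)
    res0 = x2_0 - np.polyval(b0, x1_0)
    res1 = x2_1 - np.polyval(b1, x1_1)
    x2_star = np.polyval(b1, x1_star) + quantile_map(res0, res0, res1)

    print("observed (x1, x2)      :", round(x1_0[0], 2), round(x2_0[0], 2))
    print("counterfactual (x1, x2):", round(x1_star[0], 2), round(x2_star[0], 2))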
  2. Fernandes Machado, A., Charpentier, A., Flachaire, E., Gallic, E., Hu, F. (2024). Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment. NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty.
    Abstract and links

    A binary scoring classifier can appear well-calibrated according to standard calibration metrics, even when the distribution of scores does not align with the distribution of the true events. In this paper, we investigate the impact of post-processing calibration on the score distribution (sometimes named ‘recalibration’). Using simulated data, where the true probability is known, followed by real-world datasets with prior knowledge on event distributions, we compare the performance of an XGBoost model before and after applying calibration techniques. The results show that while applying methods such as Platt scaling, Beta calibration, or isotonic regression can improve the model’s calibration, they may also lead to an increase in the divergence between the score distribution and the underlying event probability distribution.

    bib entry
    @inproceedings{
    machado2024postcalibration,
    title={Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment},
    author={Fernandes Machado, Agathe and Charpentier, Arthur and Flachaire, Emmanuel and Gallic, Ewen and Hu, Fran\c{c}ois},
    booktitle={NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty},
    year={2024},
    }
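
    Code sketch

    A minimal sketch (Python) of the comparison described in the abstract above: a classifier's scores are recalibrated with Platt scaling and with isotonic regression, and the recalibrated scores are compared to the true probabilities, which are known here because the data are simulated. The paper uses XGBoost and also considers Beta calibration; this sketch sticks to scikit-learn components and a crude point-wise distance instead of the paper's metrics.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.isotonic import IsotonicRegression
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 20_000
    x = rng.normal(size=(n, 4))
    true_p = 1 / (1 + np.exp(-(x[:, 0] - x[:, 1])))
    y = rng.binomial(1, true_p)
    x_tr, x_cal, y_tr, y_cal, _, p_cal = train_test_split(x, y, true_p, test_size=0.5, random_state=0)

    clf = GradientBoostingClassifier(random_state=0).fit(x_tr, y_tr)
    raw = clf.predict_proba(x_cal)[:, 1]

    # Platt scaling: logistic regression of the outcome on the raw score
    platt = LogisticRegression().fit(raw.reshape(-1, 1), y_cal)
    platt_scores = platt.predict_proba(raw.reshape(-1, 1))[:, 1]

    # Isotonic regression: monotone, non-parametric recalibration
    iso = IsotonicRegression(out_of_bounds="clip").fit(raw, y_cal)
    iso_scores = iso.predict(raw)

    for name, s in [("raw", raw), ("Platt", platt_scores), ("isotonic", iso_scores)]:
        print(f"{name:>8}: mean |score - true probability| = {np.mean(np.abs(s - p_cal)):.4f}")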
  1. Charpentier, A., Flachaire, E. & Gallic, E. (2024). Optimal Transport for Counterfactual Estimation: A Method for Causal Inference. In: Ngoc Thach, N., Kreinovich, V., Ha, D.T., Trung, N.D. (eds) Optimal Transport Statistics for Economics and Related Topics. Studies in Systems, Decision and Control, vol 483. Springer, Cham. doi: 10.1007/978-3-031-35763-3_3
    Abstract and links

    Many problems ask a question that can be formulated as a causal question: what would have happened if…? For example, would the person have had surgery if he or she had been Black? To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by \(x\), a collection of covariates. Here, we will have the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black) can have an impact on the values of \(x\). We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose here a mutatis mutandis version of the CATE, which will be done simply in dimension one by saying that the CATE must be computed relative to a level of probability, associated with the proportion of x (a single covariate) in the control population, and by looking for the equivalent quantile in the test population. In higher dimensions, it will be necessary to go through transport, and an application will be proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).

    bib entry
    @Inbook{Charpentier2024Optimal,
        author="Charpentier, Arthur
        and Flachaire, Emmanuel
        and Gallic, Ewen",
        editor="Ngoc Thach, Nguyen and Kreinovich, Vladik and Ha, Doan Thanh and Trung, Nguyen Duc",
        title="Optimal Transport for Counterfactual Estimation: A Method for Causal Inference",
        bookTitle="Optimal Transport Statistics for Economics and Related Topics",
        year="2024",
        publisher="Springer Nature Switzerland",
        address="Cham",
        pages="45--89",
        isbn="978-3-031-35763-3",
        doi="10.1007/978-3-031-35763-3_3",
        url="https://doi.org/10.1007/978-3-031-35763-3_3"
    }
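
    Code sketch

    A one-dimensional sketch (Python, toy data) of the mutatis mutandis comparison described in the abstract above: instead of holding the covariate fixed, the covariate of a control individual is mapped to the treated population through its quantile level before predicted outcomes are compared. The data, the covariate, and the group-wise linear outcome models are purely illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 20_000
    x_control = rng.normal(60, 8, n)                  # a single covariate in the control group
    x_treated = rng.normal(66, 10, n)                 # its distribution shifts in the other group
    y_control = 0.02 * x_control + rng.normal(0, 0.1, n)
    y_treated = 0.03 * x_treated + rng.normal(0, 0.1, n)

    def transport_1d(x, source, target):
        # Map x through its quantile level in `source` to the same level in `target`
        level = float((np.sort(source) <= x).mean())
        return np.quantile(target, level)

    x0 = 55.0                                         # a control individual's covariate value
    x0_star = transport_1d(x0, x_control, x_treated)  # its transported counterpart

    # Group-wise outcome models (simple linear fits)
    b_c = np.polyfit(x_control, y_control, 1)
    b_t = np.polyfit(x_treated, y_treated, 1)
    print("ceteris paribus effect  :", round(np.polyval(b_t, x0) - np.polyval(b_c, x0), 3))
    print("mutatis mutandis effect :", round(np.polyval(b_t, x0_star) - np.polyval(b_c, x0), 3))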
  1. Fernandes Machado, A., Charpentier, A., Flachaire, E., Gallic, E., Hu, F. From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration ⟨arXiv:2402.07790⟩
    Abstract and links

    The assessment of binary classifier performance traditionally centers on discriminative ability using metrics such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.

    bib entry
    @misc{fernandesmachado_2024_uncertainty,
          title={From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration}, 
          author={Fernandes Machado, Agathe and Charpentier, Arthur and Flachaire, Emmanuel and Gallic, Ewen and Hu, Fran\c{c}ois},
          year={2024},
          eprint={2402.07790},
          archivePrefix={arXiv},
          primaryClass={cs.LG}
    }
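
    Code sketch

    A short sketch (Python, simulated scores) of the use of local regressions as recalibration tools advocated in the abstract above: observed outcomes are smoothed against predicted scores with LOWESS, and the smoothed curve is read as the recalibrated probability. The Local Calibration Score itself is not reproduced here, and the miscalibrated scores are generated artificially.

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(6)
    n = 5_000
    true_p = rng.beta(2, 5, n)
    y = rng.binomial(1, true_p)
    scores = np.clip(true_p ** 1.8 + rng.normal(0, 0.03, n), 0, 1)   # deliberately miscalibrated

    # Local-regression calibration curve: a smooth estimate of E[Y | score]
    curve = lowess(y, scores, frac=0.3, return_sorted=True)
    recalibrated = np.interp(scores, curve[:, 0], curve[:, 1])

    print("mean |score - true probability| before:", round(float(np.mean(np.abs(scores - true_p))), 4))
    print("mean |score - true probability| after :", round(float(np.mean(np.abs(recalibrated - true_p))), 4))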
  2. Fernandes Machado, A., Hu, F., Ratz, P., Gallic, E., Charpentier, A. Geospatial Disparities: A Case Study on Real Estate Prices in Paris ⟨arXiv:2401.16197⟩
    Abstract and links

    Driven by an increasing prevalence of trackers, ever more IoT sensors, and the declining cost of computing power, geospatial information has come to play a pivotal role in contemporary predictive models. While enhancing prognostic performance, geospatial data also has the potential to perpetuate many historical socio-economic patterns, raising concerns about a resurgence of biases and exclusionary practices and their disproportionate impacts on society. Addressing this, our paper emphasizes the crucial need to identify and rectify such biases and calibration errors in predictive models, particularly as algorithms become more intricate and less interpretable. The increasing granularity of geospatial information further introduces ethical concerns, as choosing different geographical scales may exacerbate disparities akin to redlining and exclusionary zoning. To address these issues, we propose a toolkit for identifying and mitigating biases arising from geospatial data. Extending classical fairness definitions, we incorporate an ordinal regression case with spatial attributes, deviating from the binary classification focus. This extension allows us to gauge disparities stemming from data aggregation levels and advocates for a less interfering correction approach. Illustrating our methodology using a Parisian real estate dataset, we showcase practical applications and scrutinize the implications of choosing geographical aggregation levels for fairness and calibration measures.

    bib entry
    @misc{fernandesmachado_2024_geospatial,
          title={Geospatial Disparities: A Case Study on Real Estate Prices in Paris}, 
          author={Fernandes Machado, Agathe and Hu, Fran\c{c}ois and Ratz, Philipp and Gallic, Ewen and Charpentier, Arthur},
          year={2024},
          eprint={2401.16197},
          archivePrefix={arXiv},
          primaryClass={cs.LG}
    }
  3. Cabrignac, O., Charpentier, A. & Gallic, E. (2020). Modeling Joint Lives within Families. ⟨halshs-02871927⟩.
    Abstract and links

    Family history is usually seen as a significant factor that insurance companies look at when someone applies for a life insurance policy. Where it is used, a family history of cardiovascular disease, death from cancer, or high blood pressure and diabetes could result in higher premiums or no coverage at all. In this article, we use massive (historical) data to study dependencies between life lengths within families. While joint life contracts (between a husband and a wife) have long been studied in the actuarial literature, little is known about child and parent dependencies. We illustrate those dependencies using 19th century family trees in France, and quantify the implications for annuity computations. For parents and children, we observe a modest but significant positive association between life lengths. It yields different estimates for remaining life expectancy, present values of annuities, or whole life insurance guarantees, given information about the parents (such as the number of parents alive). A similar but weaker pattern is observed when using information on grandparents.

    bib entry
    @unpublished{cabrignac:halshs-02871927,
      TITLE = {{Modeling Joint Lives within Families}},
      AUTHOR = {Cabrignac, Olivier and Charpentier, Arthur and Gallic, Ewen},
      URL = {https://shs.hal.science/halshs-02871927},
      NOTE = {working paper or preprint},
      HAL_LOCAL_REFERENCE = {Working paper AMSE 2020-21},
      YEAR = {2020},
      MONTH = Jun,
      KEYWORDS = {annuities ; collaborative data ; dependence ; family history ; genealogy ; grandparents-grandchildren ; information ; joint life insurance ; parents-children ; whole life insurance},
      PDF = {https://shs.hal.science/halshs-02871927/file/WP%202020%20-%20Nr%2021.pdf},
      HAL_ID = {halshs-02871927},
      HAL_VERSION = {v1},
    }
  4. Gallic, E. & Malardé, V. (2018). Airbnb in Paris : quel impact sur l’industrie hôtelière ? ⟨halshs-01838059⟩.
    Abstract and links

    In many cities around the world, short-term rental platforms have become an alternative in the eyes of tourists. These new players, led by Airbnb, are disrupting the market, raising concerns among the hotel industry and public authorities. Using data on hotels and Airbnb listings in Paris, this article proposes a new methodology to measure the competitive pressure exerted by Airbnb on the hotel industry. The results indicate that an increase in the number of Airbnb hosts near a hotel leads that hotel to lower its price. This effect is amplified on weekend evenings.

    bib entry
    @unpublished{gallic:halshs-01838059,
      TITLE = {{Airbnb in Paris : quel impact sur l'industrie h{\^o}teli{\`e}re?}},
      AUTHOR = {Gallic, Ewen and Malard{\'e}, Vincent},
      URL = {https://shs.hal.science/halshs-01838059},
      NOTE = {working paper or preprint},
      YEAR = {2018},
      MONTH = Jul,
      KEYWORDS = {peer-to-peer platforms ;  hotel industry  ;  Airbnb ;  competition  ;  spatial statistics},
      PDF = {https://shs.hal.science/halshs-01838059/file/2018-07.pdf},
      HAL_ID = {halshs-01838059},
      HAL_VERSION = {v1},
    }
  5. Briatte, F. & Gallic, E. (2018). Recovering the French Party Space from Twitter Data. ⟨halshs-01511384v1⟩
    Abstract and links

    This study explores the possibility of retrieving information on partisan polarization from data generated by online social media users. The specific application that we pursue consists in placing a sample of over 1,000 French politicians on a unidimensional left-right scale by using their followers on Twitter as a proxy for their relative ideological positions. The methodology that we use to that end closely replicates that of Barberá (2015), who developed a Bayesian Spatial Following model to retrieve such ideal point estimates in the United States and in five European countries. Our results concur with existing measures of the French party space, and yield additional insights into the behaviour of ideologically extreme social media users.

    bib entry
    @inproceedings{briatte_halshs-01511384,
      TITLE = {{Recovering the French Party Space from Twitter Data}},
      AUTHOR = {Briatte, Fran{\c c}ois and Gallic, Ewen},
      URL = {https://shs.hal.science/halshs-01511384},
      BOOKTITLE = {{Science Po Quanti}},
      ADDRESS = {Paris, France},
      YEAR = {2015},
      MONTH = May,
      KEYWORDS = {network analysis ; political polarization ; France ; Twitter ; polarisation politique ; analyse de r{\'e}seaux},
      PDF = {https://shs.hal.science/halshs-01511384/file/Briatte-Gallic-2015-SPQ-paper.pdf},
      HAL_ID = {halshs-01511384},
      HAL_VERSION = {v1},
    }
  1. Charpentier, A. & Gallic, E. (2024). Croissance, décroissance, de quoi parle-t-on ? Risques, 138.
    Abstract and links

    “End of the world, end of the month, same fight” can regularly be read on placards at various demonstrations, and it was also the title of economist Christian Gollier’s inaugural lecture at the Collège de France, a reminder that climate change and the economy face each other in a fight that promises to be bloody. “Growth” seems to be a key element in this fight, but the fight will probably remain futile as long as this term is not clearly discussed, in order to move beyond often dogmatic entrenched positions.

    bib entry
    @article{Charpentier_2024_Risques,
    author = "Charpentier, Arthur and Gallic, Ewen",
    journal = "Risques",
    number = "138",
    title = "Croissance, décroissance, de quoi parle-t-on ?",
    year = "2024"
    }
  2. Charpentier, A. & Gallic, E. (2021). Intelligence collective et données. Risques, 126.
    Abstract and links

    Crowd psychology has long stressed the dangers of collective behaviour, starting with Charles Mackay, who asserted in 1841 that “men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one”. Fifty years later, Gustave Le Bon echoed this view, writing that “the individual is altered by the crowd, becomes above all subject to the unconscious, and regresses towards a primitive stage of humanity”. Yet in recent years we have been rediscovering that it is possible, on the contrary, to harness the “wisdom of crowds”, to use James Surowiecki’s expression.

    bib entry
    @article{Charpentier_2021_Risques,
    author = "Charpentier, Arthur and Gallic, Ewen",
    journal = "Risques",
    number = "126",
    title = "Intelligence collective et données.",
    year = "2021"
    }
  3. Charpentier, A., Barry, L. & Gallic, E. (2020). Quel avenir pour les probabilités prédictives en assurance ? Annales des Mines, 2020(1), 74–77.
    Abstract and links

    Insurance policies are classic examples of aleatory contracts, which forces insurers to regularly quantify this uncertainty and to compute probabilities in order to offer premiums that are “fair” with respect to the commitments they are about to take on. Is it not time to question this practice, at a time when artificial intelligence is booming and offers predictive algorithms of unprecedented accuracy, at a time of a Big Data/Big Brother that could mean the very disappearance of uncertainty?

    bib entry
    @article{Charpentier_2020_AnnalesMines,
    doi = {10.3917/rindu1.201.0074},
    url = {https://doi.org/10.3917/rindu1.201.0074},
    author = "Charpentier, Arthur and Barry, Laurence and Gallic, Ewen",
    fjournal = "Annales des Mines",
    journal = "Annales des Mines",
    volume = "2020",
    number = "1",
    pages = "74--77",
    title = "Quel avenir pour les probabilités prédictives en assurance ?",
    year = "2020"
    }