Selected Publications

The introduction of new "machine learning" methods and terminology to political science complicates the interpretation of results, even more so when one term – like cross-validation – can mean very different things. We find different meanings of cross-validation in applied political science work. In the context of predictive modeling, cross-validation can be used to obtain an estimate of the true error or as a procedure for model tuning. Using a single cross-validation procedure both to estimate the true error and to tune the model at the same time leads to serious misreporting of performance measures. We demonstrate the severe consequences of this problem with a series of experiments. We also observe this problematic usage of cross-validation in applied research. We look at Muchlinski et al. (2016) on the prediction of civil war onsets to illustrate how problematic use of cross-validation can affect applied work. Applying cross-validation correctly, we are unable to reproduce their findings. We encourage researchers in predictive modeling to be especially mindful when applying cross-validation.
Forthcoming, Political Analysis, 2018.
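The pitfall described in the abstract can be illustrated with a toy simulation (a minimal sketch, not code from the paper): on a pure-noise problem, the score used to *select* among many candidate models is optimistically biased, while a fresh held-out evaluation reveals the true chance-level performance. The numbers and setup are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_models = 100, 50

# Pure-noise problem: labels carry no signal, so the true accuracy
# of any classifier is 50%.
y_train = rng.integers(0, 2, n)
y_test = rng.integers(0, 2, n)

# Each "model" makes random predictions -- a stand-in for tuning over
# many hyperparameter settings on an uninformative problem.
train_preds = rng.integers(0, 2, (n_models, n))
test_preds = rng.integers(0, 2, (n_models, n))

train_acc = (train_preds == y_train).mean(axis=1)
best = train_acc.argmax()
heldout_acc = (test_preds[best] == y_test).mean()

# Reporting the score that guided model selection overstates performance;
# only the fresh held-out evaluation is an honest estimate.
print(f"score used for selection: {train_acc[best]:.2f}")
print(f"held-out score:           {heldout_acc:.2f}")
```

The same mechanism operates when one cross-validation loop is reused for both tuning and error estimation; the standard remedy is a nested procedure, with an outer loop reserved exclusively for performance estimation.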

We offer a dynamic Bayesian forecasting model for multi-party elections. It combines data from published pre-election public opinion polls with information from fundamentals-based forecasting models. The model accounts for the multi-party nature of the setting and allows statements about quantities of interest, such as the probability of a plurality of votes for a party or of a majority for certain coalitions in parliament. We present results from two ex ante forecasts of elections that took place in 2017 and show that the model outperforms fundamentals-based forecasting models in terms of accuracy and the calibration of uncertainty. Provided that historical and current polling data are available, the model can be applied to any multi-party setting.
Forthcoming, Political Analysis, 2018.
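The core idea of combining a fundamentals forecast with polls can be sketched with a simple conjugate Dirichlet-multinomial update (a stylized illustration, not the paper's dynamic model; all party shares, pseudo-counts, and poll numbers below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fundamentals-based vote-share forecast for four parties
# (illustrative numbers only).
fundamentals = np.array([0.35, 0.30, 0.20, 0.15])

# Encode the fundamentals forecast as a Dirichlet prior; the pseudo-count
# total controls its weight relative to the polling data.
prior_strength = 200
alpha_prior = fundamentals * prior_strength

# A recent poll: respondents per party (again purely illustrative).
poll_counts = np.array([360, 310, 180, 150])  # n = 1000

# Conjugate Dirichlet-multinomial update.
alpha_post = alpha_prior + poll_counts

# Monte Carlo draws from the posterior turn vote shares into event
# probabilities, e.g. party 0 winning a plurality of the vote.
draws = rng.dirichlet(alpha_post, size=10_000)
p_plurality = (draws.argmax(axis=1) == 0).mean()
print(f"P(party 0 wins plurality) = {p_plurality:.2f}")
```

The actual model adds a dynamic (backward-random-walk) component linking election day to the polling window, but the posterior-simulation step for quantities such as plurality or coalition-majority probabilities works as in this sketch.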

Recent Publications & Work in Progress

How Cross-Validation Can Go Wrong and What to Do About it.


Forecasting Elections in Multi-Party Systems: A Bayesian Approach Combining Polls and Fundamentals


A Partisan Treatment in a High Salience Election: Evidence from a Field Experiment in Germany

A Structural-Dynamic Forecasting Model for German Federal Elections



I am a teaching instructor for the following courses at the University of Mannheim:

  • Spring 2018: Tutorial Advanced Quantitative Methods, Graduate (in English), Syllabus, Evaluation
  • Fall 2017: Tutorial Multivariate Analyses, Graduate (in English), Syllabus, Evaluation
  • Spring 2017: Tutorial Advanced Quantitative Methods, Graduate (in English), Syllabus, Evaluation
  • Fall 2016: Tutorial Multivariate Analyses, Graduate (in English), Syllabus, Evaluation


Wonderful! This tutorial and its corresponding course were my favorite. Marcel is a great teacher, a great speaker, and creates a great classroom environment. He is very supportive and encouraging. I always enjoyed attending and wish there were future tutorials and courses to attend.

Marcel is an excellent tutor who knows his stuff very well and encourages us students to engage further with quantitative methods. Great job!

Marcel was one of the best tutors I had in my 5 years at German universities. He was very helpful, open to questions, friendly towards students, and easy to approach.

Excellent course. I felt myself getting more and more employable from one session to the next. Really cool stuff we learn!

Furthermore, I have been teaching at the University of Applied Sciences Ludwigshafen:

Besides that, I am also an instructor for professional training workshops:

  • February 2018: Introduction to R, 1-day workshop, Geschäftsstelle für Qualitätssicherung Hessen, Frankfurt