3. Related work
Research focused on overcoming the limitations of GPs is vast, ranging from improving scalability (Liu et al., 2020a) to tackling the model selection problem (Liu et al., 2020b; Simpson et al., 2021). However, these methods are usually not beginner-friendly and are in most cases demonstrated on clean, well-studied benchmark datasets. Previous work on the democratisation of GPs centres on the Automatic Statistician (Steinruecken et al., 2019), which takes in data and outputs results and a model fit described in natural language. That framework builds on Automated Bayesian Covariance Discovery (Lloyd et al., 2014), which uses the Bayesian Information Criterion to brute-force the design of sensible kernel functions. However, this endeavour sets aside one of the main advantages of GPs: incorporating prior knowledge. Nor does it educate modellers in how to use GPs, and hence in how to properly interpret their results.
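To make the BIC-driven search concrete, the sketch below scores a few candidate kernels on toy data and keeps the one with the lowest BIC. It is a minimal illustration using scikit-learn, not the actual Automated Bayesian Covariance Discovery procedure (which greedily grows compositional kernels); the candidate set, the synthetic data, and the added `WhiteKernel` noise term are all illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (
    RBF, RationalQuadratic, ExpSineSquared, WhiteKernel,
)

# Toy data: a noisy sine wave (illustrative assumption, not from the paper).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 50)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)

def bic(kernel, X, y):
    """BIC = k log(n) - 2 log L, with L the maximised marginal likelihood."""
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    k = len(gp.kernel_.theta)  # number of fitted kernel hyperparameters
    n = len(y)
    return k * np.log(n) - 2.0 * gp.log_marginal_likelihood_value_

# A small fixed candidate set; ABCD instead searches compositions of bases.
candidates = {
    "RBF": RBF() + WhiteKernel(),
    "RationalQuadratic": RationalQuadratic() + WhiteKernel(),
    "Periodic": ExpSineSquared() + WhiteKernel(),
}
scores = {name: bic(kern, X, y) for name, kern in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

A full ABCD-style search would iterate this scoring step, expanding the best kernel so far with sums and products of base kernels until the BIC stops improving.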
Instead, our work follows a similar structure to other data science and machine learning guidelines, providing supporting code and examples to empower the deployment of GPs in the real world (Yu and Kumbier, 2020; Bell et al., 2022).