Machine Learning Operations (MLOps), the task of coordinating machine learning projects with multiple models and team members, is growing in importance and interest. Cloud computing resources are a popular option in this setting for reasons such as easily accessible compute, billing by usage time, and additional services like a fully managed environment. Two approaches to monitoring models in an MLOps environment are compared using two popular statistical time series forecasting models and two datasets from a widely known forecasting competition. One approach is the default Amazon Web Services (AWS) model drift monitoring; the other is a tracking signal monitoring. The goal is to reduce the economic and ecological costs generated by retraining deployed models on more recent data in a cloud environment. The tracking signal monitoring is shown to be the more generic approach, which can reduce costs when decreased model performance is accepted in exchange for lower training costs. The AWS monitoring, with in-sample error metrics as the monitoring threshold used as a retraining trigger, shows better performance at a comparable number of retrainings.
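As context for the tracking signal mentioned above, a common textbook definition is the cumulative forecast error divided by the mean absolute deviation (MAD) of the errors; a minimal sketch, assuming this standard definition (the function name and the threshold value are illustrative, not taken from the paper):

```python
import numpy as np


def tracking_signal(errors):
    """Cumulative forecast error divided by the mean absolute deviation (MAD)."""
    errors = np.asarray(errors, dtype=float)
    mad = np.mean(np.abs(errors))
    return np.sum(errors) / mad


def needs_retraining(errors, threshold=4.0):
    """Trigger retraining when the tracking signal leaves a symmetric band.

    A persistent bias in the forecast errors drives |TS| upward, signalling
    that the deployed model has drifted from the data. The threshold of 4.0
    is a conventional choice, not the value used in the paper.
    """
    return abs(tracking_signal(errors)) > threshold
```

Errors that alternate in sign cancel in the cumulative sum, so the signal stays near zero for an unbiased model and grows only under systematic over- or under-forecasting.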