Course list

Nonlinear regression models are essential for capturing complex relationships between predictor and response variables that linear regression cannot adequately describe. This course is designed to give you a comprehensive understanding of these models, with a focus on polynomial regression, splines, and generalized additive models (GAMs). You will engage with their theoretical foundations, gain practical experience in their application, and develop the skills necessary to interpret and evaluate their results.
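
As a rough illustration of what such models look like in practice, here is a minimal R sketch that fits a polynomial regression, a regression spline, and a GAM; the built-in mtcars data and the splines and mgcv packages are assumptions for illustration, not course materials:

    # Illustrative only: three nonlinear fits to the built-in mtcars data
    library(splines)   # ns() for natural cubic splines
    library(mgcv)      # gam() for generalized additive models

    # Polynomial regression: mpg as a cubic function of horsepower
    fit_poly   <- lm(mpg ~ poly(hp, 3), data = mtcars)

    # Regression spline: natural cubic spline with 4 degrees of freedom
    fit_spline <- lm(mpg ~ ns(hp, df = 4), data = mtcars)

    # GAM: separate smooth terms for horsepower and weight
    fit_gam    <- gam(mpg ~ s(hp) + s(wt), data = mtcars)

    summary(fit_gam)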

In this course, you will explore strategies for incorporating categorical predictors into a regression model, including the use of dummy variables to represent different categories. You will examine binary and multi-level categorical variables and discover how to interpret the estimated coefficients of dummy variables.
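
For a concrete, purely illustrative example of dummy coding in R, using the built-in mtcars data rather than any course dataset:

    # Treat the number of cylinders (4, 6, or 8) as a categorical predictor
    mtcars$cyl_f <- factor(mtcars$cyl)

    # lm() creates the dummy variables for cyl_f automatically; the coefficients
    # cyl_f6 and cyl_f8 estimate the difference in mean mpg relative to the
    # reference level (4 cylinders), holding wt fixed
    fit <- lm(mpg ~ cyl_f + wt, data = mtcars)
    summary(fit)

    # The underlying 0/1 dummy columns
    head(model.matrix(~ cyl_f, data = mtcars))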

As you progress through the course, you will practice modeling and interpreting interactions between categorical and quantitative predictors in a linear model. Finally, you will focus on defining and implementing decision trees, which can capture complex interactions between predictors that linear models may miss. By the end of the course, you will be equipped to transform categorical variables into numerical variables, fit regression models with categorical predictors, interpret dummy variable coefficients, and use decision trees for modeling complex relationships between predictors.
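
As a rough sketch of both ideas, the R snippet below fits an interaction model and a regression tree; the built-in mtcars data and the rpart package are assumptions chosen for illustration:

    library(rpart)   # recursive partitioning for decision trees

    # Transmission type as a binary categorical predictor
    mtcars$am_f <- factor(mtcars$am, labels = c("automatic", "manual"))

    # Interaction model: the slope of wt is allowed to differ by transmission type
    fit_int <- lm(mpg ~ wt * am_f, data = mtcars)
    summary(fit_int)

    # Regression tree: successive splits can capture interactions
    # without writing them into the formula
    fit_tree <- rpart(mpg ~ wt + hp + am_f, data = mtcars)
    print(fit_tree)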

You are required to have completed the following courses or have equivalent experience before taking this course:

  • Nonlinear Regression Models

The goal of this course is to introduce you to the fundamental concepts and techniques used in predictive modeling. Throughout this course, you will evaluate the balance between model flexibility and interpretability, examine how to select tuning parameters using cross-validation, and practice building models that generalize well to new data. You will also explore techniques for splitting datasets and fitting models using loss functions. By the end of the course, you will have a solid understanding of model flexibility, interpretability, and the bias-variance trade-off, equipping you to effectively build and evaluate predictive models.
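
A minimal sketch of these steps in R, with the built-in mtcars data standing in for a real dataset (an assumption, not part of the course materials): hold out a test set, then use 5-fold cross-validation to choose a polynomial degree under squared-error loss.

    set.seed(42)

    # Hold out roughly 20% of the rows as a test set
    n        <- nrow(mtcars)
    test_idx <- sample(n, size = round(0.2 * n))
    train    <- mtcars[-test_idx, ]
    test     <- mtcars[test_idx, ]

    # Assign each training row to one of 5 folds
    folds <- sample(rep(1:5, length.out = nrow(train)))

    # Cross-validated mean squared error for polynomial degrees 1 to 4
    cv_mse <- sapply(1:4, function(degree) {
      mean(sapply(1:5, function(k) {
        fit  <- lm(mpg ~ poly(hp, degree), data = train[folds != k, ])
        pred <- predict(fit, newdata = train[folds == k, ])
        mean((train$mpg[folds == k] - pred)^2)   # squared-error loss
      }))
    })

    best_degree <- which.min(cv_mse)

    # Refit on the full training set and estimate generalization error
    final_fit <- lm(mpg ~ poly(hp, best_degree), data = train)
    mean((test$mpg - predict(final_fit, newdata = test))^2)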

You are required to have completed the following courses or have equivalent experience before taking this course:

  • Nonlinear Regression Models
  • Modeling Interactions Between Predictors

When working with real-world datasets, a single model may not be enough to capture the complexity of the data. Ensemble methods are extremely useful in these situations: by combining simpler models, they capture patterns that any one model would miss, thereby improving predictive power.

In this course, you'll discover how to use two ensemble methods: random forests and boosted decision trees. You'll apply both techniques to datasets in R to build robust predictive models. You'll practice improving decision tree performance with random forests and interpreting the resulting models, then apply boosting to decision trees, fitting trees sequentially to reduce errors and aggregating their predictions.
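
A minimal sketch of both methods in R, assuming the randomForest and gbm packages and the built-in mtcars data (the course's own datasets and packages may differ):

    library(randomForest)
    library(gbm)

    set.seed(42)

    # Random forest: average many decorrelated trees grown on bootstrap samples
    rf_fit <- randomForest(mpg ~ ., data = mtcars, ntree = 500, importance = TRUE)
    importance(rf_fit)   # variable-importance measures aid interpretation

    # Boosting: grow small trees sequentially, each correcting the errors of the
    # current ensemble, and aggregate their predictions
    gbm_fit <- gbm(mpg ~ ., data = mtcars, distribution = "gaussian",
                   n.trees = 2000, interaction.depth = 2, shrinkage = 0.01,
                   cv.folds = 5)
    best_iter <- gbm.perf(gbm_fit, method = "cv")   # number of trees chosen by CV
    summary(gbm_fit, n.trees = best_iter)           # relative influence of predictors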

You are required to have completed the following courses or have equivalent experience before taking this course:

  • Nonlinear Regression Models
  • Modeling Interactions Between Predictors
  • Foundations of Predictive Modeling
