7 Courses
This course begins by helping you reframe real-world problems in terms of supervised machine learning. By understanding the “ingredients” of a machine learning problem, you will investigate how to implement, evaluate, and improve machine learning algorithms. Ultimately, you will implement the k-Nearest Neighbors (k-NN) algorithm to build a face recognition system. Along the way, the NumPy Python library is introduced to help you simplify and improve your Python code.
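The heart of k-NN fits in a few lines of NumPy. The sketch below is illustrative only, not the course's implementation; the function name `knn_predict` and the toy dataset are invented for this example.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training example
    dists = np.linalg.norm(X_train - x, axis=1)
    # Labels of the k closest points
    nearest = y_train[np.argsort(dists)[:k]]
    # Majority vote over those labels
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Four training points in two clusters, labeled 0 and 1
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])
```

A query near the second cluster, such as `[5.0, 6.0]`, is labeled 1; one near the first cluster is labeled 0.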

In this course, you will use Maximum Likelihood Estimation (MLE) to approximate distributions from data. Using the Bayes Optimal Classifier, you will learn how the assumptions you make impact your estimates. You will then apply the Naive Bayes assumption to estimate probabilities in high-dimensional problems. Ultimately, you will apply this understanding to implement the Naive Bayes Classifier and build a name classification system.
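The combination of MLE and the Naive Bayes assumption can be sketched as follows. This is a minimal illustrative version for binary features (not the course's code); the Laplace `+1` smoothing and the function names are choices made for this example.

```python
import numpy as np

def fit_naive_bayes(X, y):
    """MLE for a Bernoulli Naive Bayes model, with +1 (Laplace) smoothing."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])        # P(y = c)
    # P(x_j = 1 | y = c), estimated by smoothed counts
    cond = np.array([(X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
                     for c in classes])
    return classes, priors, cond

def nb_predict(x, classes, priors, cond):
    # Naive Bayes assumption: features are conditionally independent given
    # the class, so the log-posterior is a sum of per-feature terms.
    log_post = (np.log(priors)
                + (x * np.log(cond) + (1 - x) * np.log(1 - cond)).sum(axis=1))
    return classes[np.argmax(log_post)]

# Toy dataset: feature 0 signals class 0, feature 1 is uninformative
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y = np.array([0, 0, 1, 1])
classes, priors, cond = fit_naive_bayes(X, y)
```

The smoothing keeps every estimated probability strictly between 0 and 1, so no log term blows up even for feature values never seen with a class.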

You must complete Problem-Solving with Machine Learning before starting this course.

In this course, you are introduced to, and implement, the Perceptron algorithm, a linear classifier developed at Cornell in 1957. Through the exploration of linear and logistic regression, you will learn to estimate probabilities that remain true to the problem setting. Using gradient descent, you will minimize loss functions. Ultimately, you will apply these skills to build an email spam classifier.
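The Perceptron's update rule is simple enough to sketch directly: whenever a point is misclassified, nudge the weight vector toward it. This is a generic sketch, not the course's implementation; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Perceptron: update (w, b) on every misclassified point until
    the data is separated or the epoch budget runs out."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += yi * xi             # rotate w toward the correct side
                b += yi
                mistakes += 1
        if mistakes == 0:                # converged: all points separated
            break
    return w, b

# Linearly separable toy data
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
```

On linearly separable data the Perceptron convergence theorem guarantees this loop makes only finitely many updates.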

You must complete Problem-Solving with Machine Learning and Estimating Probability Distributions before starting this course.

In this course, you will be introduced to the classification and regression trees (CART) algorithm. By implementing CART, you will build decision trees for a supervised classification problem. Next, you will explore how the hyperparameters of an algorithm can be adjusted and what impact they have on the accuracy of a predictive model. Through this exploration, you will practice selecting an appropriate model for a problem and dataset. You will then load a live dataset, select a model, and train a classifier to make predictions on that data.
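The key step in CART is choosing a split: scan every feature and threshold, and keep the one that minimizes the children's impurity. Below is a single-split sketch using Gini impurity (a full tree applies it recursively); the function names and toy data are invented for this example.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector (0 means the node is pure)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Exhaustive search over features and thresholds for the split that
    minimizes the size-weighted Gini impurity of the two children."""
    best = (None, None, np.inf)          # (feature, threshold, impurity)
    n = len(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():   # skip degenerate splits
                continue
            score = (left.sum() * gini(y[left])
                     + (~left).sum() * gini(y[~left])) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Feature 0 separates the classes perfectly; feature 1 is noise
X = np.array([[1.0, 5.0], [2.0, 4.0], [8.0, 5.0], [9.0, 4.0]])
y = np.array([0, 0, 1, 1])
j, t, score = best_split(X, y)
```

Here the search finds the split "feature 0 ≤ 2", which makes both children pure (impurity 0).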

You must complete Problem-Solving with Machine Learning, Estimating Probability Distributions, and Learning with Linear Classifiers before starting this course.

In this course, you will investigate the underlying mechanics of a machine learning algorithm’s prediction accuracy by exploring the bias-variance trade-off. You will identify the causes of prediction error by recognizing high bias and high variance, and you will learn techniques to reduce the negative impact these errors have on learning models. Working with ensemble methods, you will implement techniques that improve the results of your predictive models, creating more reliable and efficient algorithms.
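One classic ensemble technique for cutting variance is bagging: train one model per bootstrap resample of the data, then average their predictions. The sketch below is illustrative only; the trivial "predict the training mean" model is a stand-in invented for this example (in practice each model would be, say, a decision tree).

```python
import numpy as np

rng = np.random.default_rng(0)

def bagging(X, y, fit, n_models=25):
    """Bagging: fit one model per bootstrap resample of the training set."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))   # sample with replacement
        models.append(fit(X[idx], y[idx]))
    return models

def ensemble_predict(models, x):
    """Average the individual predictions (regression); averaging many
    high-variance models lowers the variance of the final prediction."""
    return np.mean([m(x) for m in models])

# Toy "model": always predict the mean of its training labels
fit_mean = lambda X, y: (lambda x: y.mean())

X = np.arange(10.0).reshape(-1, 1)
y = np.arange(10.0)
models = bagging(X, y, fit_mean)
```

Each bootstrap mean is a noisy estimate of the true mean (4.5 here); the ensemble average concentrates much more tightly around it than any single resample's estimate.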

You must complete these courses before starting this course:

  • Problem-Solving with Machine Learning
  • Estimating Probability Distributions
  • Learning with Linear Classifiers
  • Decision Trees and Model Selection

In this course, you will explore support-vector machines and use them to find a maximum margin classifier. You will then construct a mental model for how loss functions and regularizers are used to minimize risk and improve generalization of a learning model. Through the use of feature expansion, you will extend the capabilities of linear classifiers to find non-linear classification boundaries. Finally, you will employ kernel machines to train algorithms that can learn in infinite dimensional feature spaces.
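The "infinite-dimensional" claim is concrete with the Gaussian (RBF) kernel: the kernel value is an inner product in an infinite-dimensional feature space that is never constructed explicitly. As a minimal sketch of a kernel machine (a kernelized perceptron rather than a full SVM, and not the course's implementation), the classifier below learns the non-linearly-separable XOR pattern:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel: an inner product in an infinite-dimensional space."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_perceptron(X, y, kernel=rbf, epochs=20):
    """Kernelized perceptron (labels in {-1, +1}): the decision function is a
    weighted sum of kernel evaluations against training points, so it can
    draw non-linear boundaries."""
    n = len(y)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                alpha[i] += 1.0      # mistake: add this point to the expansion
    return alpha

def kp_predict(x, X, y, alpha, kernel=rbf):
    return np.sign(sum(a * yi * kernel(xi, x)
                       for a, yi, xi in zip(alpha, y, X)))

# XOR: no linear classifier separates these, but the RBF kernel machine can
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])
alpha = kernel_perceptron(X, y)
```

An SVM replaces the mistake-driven updates with a maximum-margin objective, but the kernelized decision function has the same form.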

You must complete these courses before starting this course:

  • Problem-Solving with Machine Learning
  • Estimating Probability Distributions
  • Learning with Linear Classifiers
  • Decision Trees and Model Selection
  • Debugging and Improving Machine Learning Models

In this course, you will investigate the fundamental components of machine learning that are used to build a neural network. You will then construct a neural network and train it on a simple data set to make predictions on new data. We then look at how a neural network can be adapted for image data by exploring convolutional networks. You will have the opportunity to explore a simple implementation of a convolutional neural network written in PyTorch, a deep learning platform. Finally, you will yet again adapt neural networks, this time for sequential data. Using a deep averaging network, you will implement a neural sequence model that analyzes product reviews to determine consumer sentiment.
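The deep averaging idea can be sketched in NumPy (the course uses PyTorch; this minimal version, with its hand-set one-hot "embeddings" and single linear layer, is an invented illustration — a real DAN learns the embeddings and stacks hidden layers before the output):

```python
import numpy as np

def dan_forward(token_ids, E, W, b):
    """Deep averaging network, minimal form: average the embeddings of the
    tokens in a review, then apply a linear layer to get a sentiment score.
    Averaging makes the representation insensitive to word order."""
    avg = E[token_ids].mean(axis=0)   # one fixed-size vector per review
    return W @ avg + b                # positive score => positive sentiment

# Tiny hand-set parameters: 3-word vocabulary with one-hot "embeddings"
E = np.eye(3)                         # rows: "great", "bad", "the"
W = np.array([[1.0, -1.0, 0.0]])      # "great" pushes positive, "bad" negative
b = np.zeros(1)
```

With these weights, a review containing "great" scores positive and one containing "bad" scores negative, regardless of where the word appears.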

You must complete these courses before starting this course:

  • Problem-Solving with Machine Learning
  • Estimating Probability Distributions
  • Learning with Linear Classifiers
  • Decision Trees and Model Selection
  • Debugging and Improving Machine Learning Models
  • Learning with Kernel Machines
Learn from Cornell's Top Minds
All certificates are personally developed by Cornell faculty.

Get It Done 100% Online
Flexible, interactive programs
that fit your life and career.

Power Your Career
Cornell’s standard of excellence can help you stand apart.
Make Cornell part of your story today.