Supervised Machine Learning: Regression and Classification Course Syllabus
Full curriculum breakdown — modules, lessons, estimated time, and outcomes.
This course introduces supervised machine learning across six modules, totaling roughly 23 hours of estimated study time. It begins with linear regression in one variable, extends to multiple variables with feature scaling and the normal equation, and then turns to classification with logistic regression and regularization. Two modules on neural networks cover representation (architectures, forward propagation, activation functions) and learning (backpropagation, initialization, gradient checking). The final module offers practical advice for applying machine learning, including data splits and bias-variance analysis, and introduces support vector machines. Each module lists its core lessons and an estimated time commitment of three to five hours; prerequisites and expected outcomes appear at the end of this syllabus.
Module 1: Introduction & Linear Regression with One Variable
Estimated time: 3 hours
- Course logistics and learning objectives
- Data representations and feature vectors
- Linear regression algorithm and cost function
- Gradient descent intuition and implementation
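The gradient descent lesson above can be sketched in a few lines. The course's programming exercises use Octave/MATLAB; this is an illustrative Python version (the function name, data, and hyperparameters are made up) that fits a one-variable model y ≈ w·x + b by batch gradient descent on the mean-squared-error cost.

```python
def gradient_descent(xs, ys, lr=0.05, epochs=5000):
    """Fit y ~ w*x + b by batch gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of the cost (1 / (2n)) * sum((w*x + b - y)^2)
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Toy data generated from y = 2x + 1, so the fit should recover w≈2, b≈1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
w, b = gradient_descent(xs, ys)
```

The learning rate must be small enough for the updates to converge; Module 1 covers this intuition, and Module 2 shows how feature scaling relaxes the constraint.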
Module 2: Linear Regression with Multiple Variables
Estimated time: 4 hours
- Multivariate linear regression model
- Feature normalization and scaling
- Normal equation vs. gradient descent
- Polynomial regression for nonlinear patterns
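Feature normalization, the second lesson in this module, has a compact form. A minimal sketch (in Python rather than the course's Octave/MATLAB; the sample values are invented) of z-score scaling, which rescales each feature to mean 0 and standard deviation 1 so gradient descent converges faster:

```python
import statistics

def standardize(column):
    """Z-score scaling: subtract the mean, divide by the std. deviation."""
    mu = statistics.mean(column)
    sigma = statistics.pstdev(column)  # population standard deviation
    return [(x - mu) / sigma for x in column]

# Hypothetical house sizes in square feet; ranges like this dwarf other
# features (e.g. number of bedrooms) unless they are rescaled.
sizes = [2104.0, 1416.0, 1534.0, 852.0]
scaled = standardize(sizes)
```

After scaling, every feature occupies a comparable range, so a single learning rate works for all of them.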
Module 3: Logistic Regression & Regularization
Estimated time: 4 hours
- Classification with logistic regression
- Sigmoid function and decision boundaries
- Cost function adaptation for classification
- Regularization to prevent overfitting
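The sigmoid and decision-boundary lessons above reduce to a short sketch. This illustrative Python fragment (weights and inputs are made up; the course's exercises are in Octave/MATLAB) shows how a logistic model turns a linear score w·x + b into a class prediction:

```python
import math

def sigmoid(z):
    """Squash a real-valued score into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b, threshold=0.5):
    # With threshold 0.5, the decision boundary is the set of points
    # where w.x + b = 0: sigmoid(z) >= 0.5 exactly when z >= 0.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if sigmoid(z) >= threshold else 0

print(predict([2.0, 1.0], [1.0, 1.0], -1.0))  # z = 2.0, so class 1
```

Regularization (the last lesson) modifies only the cost function used to learn w and b; the prediction rule above is unchanged.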
Module 4: Neural Networks: Representation
Estimated time: 3 hours
- Biological and artificial neurons
- Network architectures and layers
- Forward propagation mechanism
- Activation functions (sigmoid, tanh, ReLU)
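Forward propagation through the layers listed above can be sketched directly. The network shape, weights, and inputs below are hypothetical (and the course's own exercises use Octave/MATLAB); the point is only that each layer applies an activation function to a weighted sum of the previous layer's outputs:

```python
import math

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(inputs, weights, biases, act):
    """One fully connected layer: act(W x + b), one row of W per unit."""
    return [act(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A made-up 2-2-1 network: ReLU hidden layer, sigmoid output layer.
x = [1.0, 0.0]
h = dense(x, [[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1], relu)
y = dense(h, [[1.0, -1.0]], [0.0], sigmoid)
```

Swapping `relu` for `sigmoid` or tanh changes only the activation argument, which is why the module treats activation functions as a separate lesson.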
Module 5: Neural Networks: Learning
Estimated time: 4 hours
- Backpropagation algorithm
- Random initialization of weights
- Gradient checking for debugging
- Hyperparameter tuning basics
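Gradient checking, the debugging technique listed above, is easy to demonstrate on a toy loss. This sketch (Python rather than the course's Octave/MATLAB; the loss function is invented for illustration) compares an analytic derivative against a centered finite difference, which is exactly how one verifies a backpropagation implementation:

```python
def numerical_grad(f, x, eps=1e-5):
    """Centered finite-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Toy loss L(w) = (w - 3)^2 with known analytic gradient 2 * (w - 3).
loss = lambda w: (w - 3.0) ** 2
w = 1.0
analytic = 2 * (w - 3.0)
numeric = numerical_grad(loss, w)
```

If the two values disagree beyond a small tolerance, the analytic gradient code has a bug. Gradient checking is slow, so it is used only for debugging, then disabled for training.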
Module 6: Advice for Applying Machine Learning & Support Vector Machines
Estimated time: 5 hours
- Train/validation/test splits
- Bias–variance trade-off analysis
- Error analysis and model improvement
- Support vector machines and Gaussian kernels
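The train/validation/test split that opens this module can be sketched concisely. A minimal Python version (the function name, fractions, and seed are illustrative choices, not prescribed by the course):

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle the data, then carve off test and validation portions."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
```

The validation set is used to choose hyperparameters and the test set only for the final performance estimate, which keeps the bias-variance analysis in this module honest.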
Prerequisites
- Basic knowledge of linear algebra
- Familiarity with Octave/MATLAB programming
- High-school-level calculus and probability
What You'll Be Able to Do After This Course
- Implement linear and logistic regression models from scratch
- Apply regularization techniques to improve model generalization
- Train and debug neural networks for classification tasks
- Select appropriate ML algorithms based on problem type
- Evaluate models using cross-validation and error analysis