Video description
27+ Hours of Video Instruction
An outstanding data scientist or machine learning engineer must master more than the basics of using ML algorithms with the most popular libraries, such as scikit-learn and Keras. To train innovative models or deploy them to run performantly in production, an in-depth appreciation of machine learning theory is essential. That appreciation includes a working understanding of the foundational subjects: linear algebra, calculus, probability, statistics, data structures, and algorithms.
When the foundations of machine learning are firm, it becomes easier to make the jump from general ML principles to specialized ML domains, such as deep learning, natural language processing, machine vision, and reinforcement learning. The more specialized the application, the more likely its implementation details are available only in academic papers or graduate-level textbooks, both of which assume an understanding of the foundational subjects.
This master class includes the following courses:
- Linear Algebra for Machine Learning
- Calculus for Machine Learning
- Probability and Statistics for Machine Learning
- Data Structures, Algorithms, and Machine Learning Optimization
Linear Algebra for Machine Learning LiveLessons provides you with an understanding of the theory and practice of linear algebra, with a focus on machine learning applications.
Calculus for Machine Learning LiveLessons introduces the mathematical field of calculus—the study of rates of change—from the ground up. It is essential because computing derivatives via differentiation is the basis of optimizing most machine learning algorithms, including those used in deep learning, such as backpropagation and stochastic gradient descent.
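To illustrate that idea with a minimal sketch (plain Python, not course material): the derivative of a cost function tells us which way is "downhill," and repeatedly stepping against it is the essence of gradient descent.

```python
# Minimal gradient-descent sketch: minimize the cost C(x) = (x - 3)^2.
# Its derivative, dC/dx = 2 * (x - 3), points uphill, so we step the
# opposite way on each iteration.

def derivative(x):
    return 2 * (x - 3)

x = 0.0    # initial guess
lr = 0.1   # learning rate
for _ in range(100):
    x -= lr * derivative(x)  # step against the gradient

print(round(x, 4))  # converges toward the minimum at x = 3
```

Stochastic gradient descent and backpropagation, covered later in the course, build on exactly this loop: backpropagation computes the derivatives efficiently, and SGD applies the update step to model parameters.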
Probability and Statistics for Machine Learning (Machine Learning Foundations) LiveLessons provides you with a functional, hands-on understanding of probability theory and statistical modeling, with a focus on machine learning applications.
Data Structures, Algorithms, and Machine Learning Optimization LiveLessons provides you with a functional, hands-on understanding of the essential computer science for machine learning applications.
About the Instructor
Jon Krohn is Chief Data Scientist at the machine learning company Nebula. He authored the book Deep Learning Illustrated, an instant #1 bestseller that was translated into seven languages. He is also the host of SuperDataScience, the industry’s most listened-to podcast. Jon is renowned for his compelling lectures, which he offers at Columbia University, New York University, leading industry conferences, via O’Reilly, and via his award-winning YouTube channel. He holds a PhD from Oxford and has been publishing on machine learning in prominent academic journals since 2010; his papers have been cited more than a thousand times.
Course Requirements
- Mathematics: Familiarity with secondary school-level mathematics will make the course easier to follow. If you are comfortable dealing with quantitative information—such as understanding charts and rearranging simple equations—then you should be well-prepared to follow along with all of the mathematics.
- Programming: All code demos are in Python, so experience with it or another object-oriented programming language will be helpful for following along with the hands-on examples.
About Pearson Video Training
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Sams, and Que. Topics include: IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video
Table of Contents
Linear Algebra for Machine Learning (Machine Learning Foundations): Introduction
Introduction
Lesson 1: Orientation to Linear Algebra
Topics
1.1 Defining Linear Algebra
1.2 Solving a System of Equations Algebraically
1.3 Linear Algebra in Machine Learning and Deep Learning
1.4 Historical and Contemporary Applications
1.5 Exercise
Lesson 2: Data Structures for Algebra
Topics
2.1 Tensors
2.2 Scalars
2.3 Vectors and Vector Transposition
2.4 Norms and Unit Vectors
2.5 Basis, Orthogonal, and Orthonormal Vectors
2.6 Matrices
2.7 Generic Tensor Notation
2.8 Exercises
Lesson 3: Common Tensor Operations
Topics
3.1 Tensor Transposition
3.2 Basic Tensor Arithmetic
3.3 Reduction
3.4 The Dot Product
3.5 Exercises
Lesson 4: Solving Linear Systems
Topics
4.1 The Substitution Strategy
4.2 Substitution Exercises
4.3 The Elimination Strategy
4.4 Elimination Exercises
Lesson 5: Matrix Multiplication
Topics
5.1 Matrix-by-Vector Multiplication
5.2 Matrix-by-Matrix Multiplication
5.3 Symmetric and Identity Matrices
5.4 Exercises
5.5 Machine Learning and Deep Learning Applications
Lesson 6: Special Matrices and Matrix Operations
Topics
6.1 The Frobenius Norm
6.2 Matrix Inversion
6.3 Diagonal Matrices
6.4 Orthogonal Matrices
6.5 The Trace Operator
Lesson 7: Eigenvectors and Eigenvalues
Topics
7.1 The Eigenconcept
7.2 Exercises
7.3 Eigenvectors in Python
7.4 High-Dimensional Eigenvectors
Lesson 8: Matrix Determinants and Decomposition
Topics
8.1 The Determinant of a 2 x 2 Matrix
8.2 The Determinants of Larger Matrices
8.3 Exercises
8.4 Determinants and Eigenvalues
8.5 Eigendecomposition
Lesson 9: Machine Learning with Linear Algebra
Topics
9.1 Singular Value Decomposition
9.2 Media File Compression
9.3 The Moore-Penrose Pseudoinverse
9.4 Regression via Pseudoinversion
9.5 Principal Component Analysis
9.6 Resources for Further Study of Linear Algebra
Summary
Linear Algebra for Machine Learning (Machine Learning Foundations): Summary
Calculus for Machine Learning: Introduction
Introduction
Lesson 1: Orientation to Calculus
Topics
1.1 Differential versus Integral Calculus
1.2 A Brief History
1.3 Calculus of the Infinitesimals
1.4 Modern Applications
Lesson 2: Limits
Topics
2.1 Continuous versus Discontinuous Functions
2.2 Solving via Factoring
2.3 Solving via Approaching
2.4 Approaching Infinity
2.5 Exercises
Lesson 3: Differentiation
Topics
3.1 Delta Method
3.2 The Most Common Representation
3.3 Derivative Notation
3.4 Constants
3.5 Power Rule
3.6 Constant Product Rule
3.7 Sum Rule
3.8 Exercises
Lesson 4: Advanced Differentiation Rules
Topics
4.1 Product Rule
4.2 Quotient Rule
4.3 Chain Rule
4.4 Exercises
4.5 Power Rule on a Function Chain
Lesson 5: Automatic Differentiation
Topics
5.1 Introduction
5.2 Autodiff with PyTorch
5.3 Autodiff with TensorFlow
5.4 Directed Acyclic Graph of a Line Equation
5.5 Fitting a Line with Machine Learning
Lesson 6: Partial Derivatives
Topics
6.1 Derivatives of Multivariate Functions
6.2 Partial Derivative Exercises
6.3 Geometrical Examples
6.4 Geometrical Exercises
6.5 Notation
6.6 Chain Rule
6.7 Chain Rule Exercises
Lesson 7: Gradients
Topics
7.1 Single-Point Regression
7.2 Partial Derivatives of Quadratic Cost
7.3 Descending the Gradient of Cost
7.4 Gradient of Mean Squared Error
7.5 Backpropagation
7.6 Higher-Order Partial Derivatives
7.7 Exercise
Lesson 8: Integrals
Topics
8.1 Binary Classification
8.2 The Confusion Matrix and ROC Curve
8.3 Indefinite Integrals
8.4 Definite Integrals
8.5 Numeric Integration with Python
8.6 Exercises
8.7 Finding the Area Under the ROC Curve
8.8 Resources for Further Study of Calculus
Summary
Calculus for Machine Learning: Summary
Probability and Statistics for Machine Learning: Introduction
Introduction
Lesson 1: Introduction to Probability
Topics
1.1 Orientation to the Machine Learning Foundations Series
1.2 What Probability Theory Is
1.3 Events and Sample Spaces
1.4 Multiple Observations
1.5 Factorials and Combinatorics
1.6 Exercises
1.7 The Law of Large Numbers and the Gambler’s Fallacy
1.8 Probability Distributions in Statistics
1.9 Bayesian versus Frequentist Statistics
1.10 Applications of Probability to Machine Learning
Lesson 2: Random Variables
Topics
2.1 Discrete and Continuous Variables
2.2 Probability Mass Functions
2.3 Probability Density Functions
2.4 Exercises on Probability Functions
2.5 Expected Value
2.6 Exercises on Expected Value
Lesson 3: Describing Distributions
Topics
3.1 The Mean, a Measure of Central Tendency
3.2 Medians
3.3 Modes
3.4 Quantiles: Percentiles, Quartiles, and Deciles
3.5 Box-and-Whisker Plots
3.6 Variance, a Measure of Dispersion
3.7 Standard Deviation
3.8 Standard Error
3.9 Covariance, a Measure of Relatedness
3.10 Correlation
Lesson 4: Relationships Between Probabilities
Topics
4.1 Joint Probability Distribution
4.2 Marginal Probability
4.3 Conditional Probability
4.4 Exercises
4.5 Chain Rule of Probabilities
4.6 Independent Random Variables
4.7 Conditional Independence
Lesson 5: Distributions in Machine Learning
Topics
5.1 Uniform
5.2 Gaussian: Normal and Standard Normal
5.3 The Central Limit Theorem
5.4 Log-Normal
5.5 Exponential and Laplace
5.6 Binomial and Multinomial
5.7 Poisson
5.8 Mixture Distributions
5.9 Preprocessing Data for Model Input
5.10 Exercises
Lesson 6: Information Theory
Topics
6.1 What Information Theory Is
6.2 Self-Information, Nats, and Bits
6.3 Shannon and Differential Entropy
6.4 Kullback-Leibler Divergence and Cross-Entropy
Lesson 7: Introduction to Statistics
Topics
7.1 Applications of Statistics to Machine Learning
7.2 Review of Essential Probability Theory
7.3 z-scores and Outliers
7.4 Exercises on z-scores
7.5 p-values
7.6 Exercises on p-values
Lesson 8: Comparing Means
Topics
8.1 Single-Sample t-tests and Degrees of Freedom
8.2 Independent t-tests
8.3 Paired t-tests
8.4 Applications to Machine Learning
8.5 Exercises
8.6 Confidence Intervals
8.7 ANOVA: Analysis of Variance
Lesson 9: Correlation
Topics
9.1 The Pearson Correlation Coefficient
9.2 R-squared Coefficient of Determination
9.3 Correlation versus Causation
9.4 Correcting for Multiple Comparisons
Lesson 10: Regression
Topics
10.1 Independent versus Dependent Variables
10.2 Linear Regression to Predict Continuous Values
10.3 Fitting a Line to Points on a Cartesian Plane
10.4 Linear Least Squares Exercise
10.5 Ordinary Least Squares
10.6 Categorical “Dummy” Features
10.7 Logistic Regression to Predict Categories
10.8 Open-Ended Exercises
Lesson 11: Bayesian Statistics
Topics
11.1 Machine Learning versus Frequentist Statistics
11.2 When to Use Bayesian Statistics
11.3 Prior Probabilities
11.4 Bayes’ Theorem
11.5 Resources for Further Study of Probability and Statistics
Summary
Probability and Statistics for Machine Learning: Summary
Data Structures, Algorithms, and Machine Learning Optimization: Introduction
Introduction
Lesson 1: Orientation to Data Structures and Algorithms
Topics
1.1 Orientation to the Machine Learning Foundations Series
1.2 A Brief History of Data
1.3 A Brief History of Algorithms
1.4 Applications to Machine Learning
Lesson 2: “Big O” Notation
Topics
2.1 Introduction
2.2 Constant Time
2.3 Linear Time
2.4 Polynomial Time
2.5 Common Runtimes
2.6 Best versus Worst Case
Lesson 3: List-Based Data Structures
Topics
3.1 Lists
3.2 Arrays
3.3 Linked Lists
3.4 Doubly-Linked Lists
3.5 Stacks
3.6 Queues
3.7 Deques
Lesson 4: Searching and Sorting
Topics
4.1 Binary Search
4.2 Bubble Sort
4.3 Merge Sort
4.4 Quick Sort
Lesson 5: Sets and Hashing
Topics
5.1 Maps and Dictionaries
5.2 Sets
5.3 Hash Functions
5.4 Collisions
5.5 Load Factor
5.6 Hash Maps
5.7 String Keys
5.8 Hashing in ML
Lesson 6: Trees
Topics
6.1 Introduction
6.2 Decision Trees
6.3 Random Forests
6.4 XGBoost: Gradient-Boosted Trees
6.5 Additional Concepts
Lesson 7: Graphs
Topics
7.1 Introduction
7.2 Directed versus Undirected Graphs
7.3 DAGs: Directed Acyclic Graphs
7.4 Additional Concepts
7.5 Bonus: Pandas DataFrames
7.6 Resources for Further Study of DSA
Lesson 8: Machine Learning Optimization
Topics
8.1 Statistics versus Machine Learning
8.2 Objective Functions
8.3 Mean Absolute Error
8.4 Mean Squared Error
8.5 Minimizing Cost with Gradient Descent
8.6 Gradient Descent from Scratch with PyTorch
8.7 Critical Points
8.8 Stochastic Gradient Descent
8.9 Learning Rate Scheduling
8.10 Maximizing Reward with Gradient Ascent
Lesson 9: Fancy Deep Learning Optimizers
Topics
9.1 Jacobian Matrices
9.2 Second-Order Optimization and Hessians
9.3 Momentum
9.4 Adaptive Optimizers
9.5 Congratulations and Next Steps
Summary
Data Structures, Algorithms, and Machine Learning Optimization: Summary