Coding Brewery

  • Blog
  • Software development
    • Clean Code
    • C++
    • Python
    • Java
    • DevOps
      • Docker
      • Linux
  • Machine learning
    • Maths
      • Median absolute deviation (MAD) of Errors
      • R-Squared/Coefficient of determination
      • Distribution of error functions
      • Logarithmic loss (or cross-entropy)
      • Receiver operating characteristic (ROC) curve
      • Confusion Matrix
      • Maths 101: Part 8: Hypothesis testing
      • Maths 101: Part 7: Estimating Confidence Intervals
    • Data Science Libraries
    • Dimensionality Reduction
  • Career Advice
  • Home
    • Contact Us
    • About

Blog

Dimensionality Reduction

Dimensionality Reduction In Machine Learning: Some mathematical prerequisites: Mean Vector, Covariance Matrix and Column Standardization

This is part 2 of Introduction to Dimensionality Reduction. In this blog post, we will cover several mathematical prerequisites that one must know before trying to understand machine learning. Mean Vector The sample mean is a vector each of whose Read more
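The three prerequisites this post names can be sketched in a few lines of NumPy. The data matrix below is a hypothetical toy example, not from the post itself:

```python
import numpy as np

# Toy data: 4 samples (rows), 3 features (columns) - hypothetical values
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, 12.0]])

# Mean vector: the column-wise mean of the data matrix
mean_vector = X.mean(axis=0)

# Covariance matrix, treating columns as variables
cov_matrix = np.cov(X, rowvar=False)

# Column standardization: zero mean, unit variance per column
X_std = (X - mean_vector) / X.std(axis=0)

print(mean_vector)                 # [2.5 5.  7.5]
print(round(cov_matrix[0, 0], 3))  # 1.667
```

After standardization every column has mean 0 and standard deviation 1, which is the property dimensionality-reduction techniques such as PCA usually assume.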

By Deepanshu Lulla, June 1, 2019
Clean Code

Clean Code Concepts: Be SOLID: Open Closed Principle

In this series, we will focus on some language-agnostic principles that can improve your ability to write cleaner code in any language. So let’s dive right into the SOLID concepts of object-oriented design. S.O.L.I.D is an Read more
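The Open-Closed Principle this post covers can be illustrated with a minimal sketch (hypothetical classes, not from the post): code should be open for extension but closed for modification.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        ...

class Rectangle(Shape):
    def __init__(self, w: float, h: float):
        self.w, self.h = w, h
    def area(self) -> float:
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return 3.14159 * self.r ** 2

def total_area(shapes) -> float:
    # Adding a new Shape subclass extends behavior
    # without modifying this function
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))  # 9.14159
```

A new shape (say, `Triangle`) only needs to subclass `Shape`; `total_area` never changes.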

By admin, May 11, 2019
Dimensionality Reduction

Dimensionality Reduction: Part 1: Introduction and defining data as data frame

Introduction to Dimensionality Reduction There are several ways we can define Dimensionality Reduction. One way to define it is: Dimensionality Reduction refers to the process of converting a set of data having vast dimensions into data with fewer dimensions, ensuring Read more
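A minimal sketch of what this definition means in practice, using PCA via SVD (a common technique; the 5×3 data set below is hypothetical):

```python
import numpy as np

# Hypothetical data: 5 samples with 3 features, reduced to 2 dimensions
X = np.array([[2.5, 2.4, 1.0],
              [0.5, 0.7, 0.2],
              [2.2, 2.9, 1.1],
              [1.9, 2.2, 0.9],
              [3.1, 3.0, 1.3]])

Xc = X - X.mean(axis=0)        # center each column
U, S, Vt = np.linalg.svd(Xc)   # principal directions are the rows of Vt
X_reduced = Xc @ Vt[:2].T      # project onto the top 2 components

print(X_reduced.shape)  # (5, 2)
```

The projected data keeps the directions of greatest variance while dropping the least informative dimension.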

By Deepanshu Lulla, May 11, 2019
Data Preprocessing

Data Preprocessing and Cleaning: Part 1: Column Normalization

Before applying any dimensionality reduction technique, it is sometimes important to preprocess the data. There are several ways to preprocess data. In this post, we will explore one of the common ways to do data preprocessing Read more
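The column (min-max) normalization this post introduces can be sketched in NumPy; the two-column matrix below is a made-up example:

```python
import numpy as np

# Min-max normalization: rescale each column to the [0, 1] range
X = np.array([[10.0, 200.0],
              [20.0, 400.0],
              [30.0, 600.0]])

col_min = X.min(axis=0)
col_max = X.max(axis=0)
X_norm = (X - col_min) / (col_max - col_min)

print(X_norm)
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]
```

After normalization, features measured on very different scales (here tens vs. hundreds) contribute comparably to distance-based methods.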

By Deepanshu Lulla, April 20, 2019
Clean Code

Clean Code Concepts: Be SOLID: Single Responsibility Principle

Writing clean code is more of an art than a science. So what really makes code cleaner? In this series, called Clean Code Concepts, we investigate some of the ways to write code in a clean way. There are Read more
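The Single Responsibility Principle this post covers can be shown with a hypothetical before/after sketch: each class should have exactly one reason to change.

```python
# One class per responsibility (hypothetical names, for illustration)

class ReportGenerator:
    """Only builds report content."""
    def generate(self, data: str) -> str:
        return f"Report: {data}"

class ReportSaver:
    """Only handles persistence; formatting changes never touch this class."""
    def save(self, report: str, path: str) -> None:
        with open(path, "w") as f:
            f.write(report)

print(ReportGenerator().generate("Q1 sales"))  # Report: Q1 sales
```

A single `Report` class that both formatted and saved would have two reasons to change; splitting it keeps each change localized.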

By Deepanshu Lulla, April 6, 2019
Machine learning

Maths 101: Part 7: Estimating Confidence Intervals

In statistics, a confidence interval (CI) is a type of interval estimate computed from the statistics of the observed data. The interval has an associated confidence level that, loosely speaking, quantifies the level of confidence that the value Read more
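A minimal sketch of a 95% confidence interval for a mean, using the normal approximation (z = 1.96) and a hypothetical sample:

```python
import math
import statistics

# Hypothetical measurements
sample = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
n = len(sample)

mean = statistics.mean(sample)
# Standard error of the mean: s / sqrt(n)
sem = statistics.stdev(sample) / math.sqrt(n)

# 95% CI under the normal approximation
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(mean)  # 5.0
print(ci)
```

For small samples one would normally use the t-distribution instead of the fixed z = 1.96; the structure of the interval is the same.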

By admin, March 23, 2019
Machine learning

Maths 101: Part 6: Measuring relationship between two Random Variables

Suppose you have taken data for the heights and weights of students in a class and you want to figure out the correlation between them. The relation between these two parameters is defined mathematically by one of Read more
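The heights-and-weights example can be sketched with the Pearson correlation coefficient; the five data points below are made up for illustration:

```python
import numpy as np

# Hypothetical heights (cm) and weights (kg) for five students
heights = np.array([150, 160, 165, 170, 180])
weights = np.array([50, 58, 62, 66, 75])

# Pearson correlation: +1 = perfect positive, -1 = perfect negative, 0 = none
r = np.corrcoef(heights, weights)[0, 1]
print(round(r, 3))
```

Here the two variables grow almost perfectly together, so r comes out very close to +1.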

By admin, March 11, 2019
Machine learning

Maths 101: Part 5: Different Types of Distribution

Types of Distributions Bernoulli and Binomial distribution A Bernoulli random variable has two possible outcomes: 0 or 1. A binomial distribution is the sum of independent and identically distributed Bernoulli random variables. So, for example, say I have a coin, Read more
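The statement that a binomial is a sum of i.i.d. Bernoulli trials can be checked by simulation (a sketch with the standard library, not code from the post):

```python
import random

random.seed(0)

# A Bernoulli trial: 1 with probability p, else 0
def bernoulli(p: float) -> int:
    return 1 if random.random() < p else 0

# A Binomial(n, p) draw is the sum of n i.i.d. Bernoulli(p) trials
def binomial(n: int, p: float) -> int:
    return sum(bernoulli(p) for _ in range(n))

draws = [binomial(10, 0.5) for _ in range(10_000)]
avg = sum(draws) / len(draws)
print(avg)  # close to the theoretical mean n * p = 5
```

The empirical average of the draws sits near n·p, as the binomial mean formula predicts.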

By admin, February 23, 2019
Machine learning

Maths 101: Part 4: PDF, Central Limit Theorem and Chebyshev’s inequality

Populations and Samples The main difference between a population and a sample has to do with how observations are assigned to the data set. Population: includes all of the elements from a set of data. Sample: consists of one or Read more
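The Central Limit Theorem named in this post's title can be sketched by simulation: means of repeated samples from a (non-normal) uniform population cluster around the population mean. A standard-library sketch, with arbitrary sample sizes:

```python
import random
import statistics

random.seed(42)

# Draw 2000 samples of size 30 from Uniform(0, 1), whose mean is 0.5,
# and record each sample's mean
sample_means = [
    statistics.mean(random.random() for _ in range(30))
    for _ in range(2000)
]

print(round(statistics.mean(sample_means), 2))  # close to 0.5
```

The CLT says this distribution of sample means is approximately normal around 0.5, even though the underlying uniform population is not normal.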

By admin, February 9, 2019
Machine learning

Maths 101: Part 3: Random variables and Normal Distribution

Random Variables The term random variable is not very descriptive. A better term is measurement function. Consider tossing a fair six-sided die. There are only six possible outcomes, Ω = {1, 2, 3, 4, 5, 6}. As we know, if the Read more
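The fair-die example can be made concrete: treat the random variable as a function on Ω and compute its expected value, E[X] = Σ x·P(X = x). A small standard-library sketch:

```python
from fractions import Fraction

# Sample space of a fair six-sided die
outcomes = range(1, 7)
p = Fraction(1, 6)  # each outcome is equally likely

# X is the identity measurement function here: X(ω) = ω
expected_value = sum(x * p for x in outcomes)

print(expected_value)  # 7/2
```

Using `Fraction` keeps the arithmetic exact: the expected value of a fair die is exactly 7/2 = 3.5.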

By admin, January 26, 2019

Posts pagination

Previous 1 … 4 5 6 7 Next
Hestia | Developed by ThemeIsle