
datascience

A data science learning roadmap: a systematic guide from the basics to advanced topics

This is a systematic data science learning roadmap covering key topics from basic mathematics to advanced statistical analysis. It includes fundamentals such as matrix algebra, hash functions, and relational algebra; practical skills such as database operations, ETL, and NoSQL; and statistics topics such as data visualization and exploratory analysis. The project gives data science learners a comprehensive, structured learning framework.

Give a 🌟 if it's useful and share with other Data Science Enthusiasts.

Data-Scientist-Roadmap (2021)

roadmap-picture


1_ Fundamentals

1_ Matrices & Algebra fundamentals

About

In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. A submatrix is obtained from a matrix by deleting any collection of its rows and/or columns.

matrix-image

Operations

There are a number of basic operations that can be applied to modify matrices, such as addition, scalar multiplication, transposition, and matrix multiplication.
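
A minimal NumPy sketch of these operations, using small example matrices (the values are arbitrary):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])   # 2x2 example matrix
    B = np.array([[5, 6], [7, 8]])

    addition   = A + B       # element-wise sum
    scaled     = 3 * A       # scalar multiplication
    transposed = A.T         # transposition: swap rows and columns
    product    = A @ B       # matrix multiplication
    submatrix  = A[:1, :]    # deleting rows/columns yields a submatrix

    print(product)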

2_ Hash function, binary tree, O(n)

Hash function

Definition

A hash function is any function that can be used to map data of arbitrary size to data of fixed size. One use is a data structure called a hash table, widely used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file.

hash-image
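
A short Python sketch of the idea: hashlib maps data of arbitrary size to a fixed-size digest, and a dict (a hash table) uses hashing internally for rapid lookup. The record value is hypothetical:

    import hashlib

    record = "alice@example.com"                    # hypothetical record key

    # Map data of arbitrary size to a fixed-size (256-bit) digest.
    digest = hashlib.sha256(record.encode("utf-8")).hexdigest()
    print(digest)

    # Hash tables (Python dicts) rely on hashing for rapid lookup.
    table = {record: {"id": 42}}
    print(table[record])                            # average O(1) lookup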

Binary tree

Definition

In computer science, a binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child.

binary-tree-image
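
A minimal Python sketch of a binary-tree node with left and right children, plus an in-order traversal of a small example tree:

    class Node:
        """A binary-tree node: at most two children, left and right."""
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    # A small example tree:   2
    #                        / \
    #                       1   3
    root = Node(2, left=Node(1), right=Node(3))

    def inorder(node):
        """Visit the left subtree, then the node, then the right subtree."""
        if node is None:
            return []
        return inorder(node.left) + [node.value] + inorder(node.right)

    print(inorder(root))   # [1, 2, 3]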

O(n)

Definition

In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation.
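
A toy Python illustration of growth rates: a linear scan of a list is O(n), while a hash-table (dict) lookup is O(1) on average. The data are arbitrary:

    items = list(range(1_000_000))           # n = 1,000,000
    lookup = {x: True for x in items}        # hash table built from the same data

    def linear_search(seq, target):
        # O(n): in the worst case every element is inspected
        for x in seq:
            if x == target:
                return True
        return False

    print(linear_search(items, 999_999))     # scans the whole list
    print(999_999 in lookup)                 # O(1) average-case hash lookup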

3_ Relational algebra, DB basics

Definition

Relational algebra is a family of algebras with a well-founded semantics used for modelling the data stored in relational databases, and defining queries on it.

The main application of relational algebra is providing a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL.

Natural join

About

In SQL, a natural join between two tables is performed when:

  • at least one column has the same name in both tables
  • these two columns have the same data type:
    • CHAR (character)
    • INT (integer)
    • FLOAT (floating-point number)
    • VARCHAR (variable-length character string)

MySQL queries

    SELECT <COLUMNS>
    FROM <TABLE_1>
    NATURAL JOIN <TABLE_2>

    SELECT <COLUMNS>
    FROM <TABLE_1>, <TABLE_2>
    WHERE TABLE_1.ID = TABLE_2.ID

4_ Inner, Outer, Cross, theta-join

Inner join

The INNER JOIN keyword selects records that have matching values in both tables.

Query

  SELECT column_name(s)
  FROM table1
  INNER JOIN table2 ON table1.column_name = table2.column_name;

inner-join-image

Outer join

The FULL OUTER JOIN keyword returns all records when there is a match in either the left (table1) or the right (table2) table.

Query

  SELECT column_name(s)
  FROM table1
  FULL OUTER JOIN table2 ON table1.column_name = table2.column_name; 

outer-join-image

Left join

The LEFT JOIN keyword returns all records from the left table (table1), and the matched records from the right table (table2). The result is NULL from the right side, if there is no match.

Query

  SELECT column_name(s)
  FROM table1
  LEFT JOIN table2 ON table1.column_name = table2.column_name;

left-join-image

Right join

The RIGHT JOIN keyword returns all records from the right table (table2), and the matched records from the left table (table1). The result is NULL from the left side, when there is no match.

Query

  SELECT column_name(s)
  FROM table1
  RIGHT JOIN table2 ON table1.column_name = table2.column_name;

right-join-image

5_ CAP theorem

It is impossible for a distributed data store to simultaneously provide more than two out of the following three guarantees:

  • Consistency: every read receives the most recent write or an error.
  • Availability: every request receives a (non-error) response, without the guarantee that it contains the most recent write.
  • Partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes.

In other words, the CAP Theorem states that in the presence of a network partition, one has to choose between consistency and availability. Note that consistency as defined in the CAP Theorem is quite different from the consistency guaranteed in ACID database transactions.

6_ Tabular data

Tabular data stand in contrast to relational data, such as a SQL database.

In tabular data, everything is arranged in columns and rows. Every row has the same number of columns (except for missing values, which can be substituted by "N/A").

The first line of tabular data is usually a header describing the content of each column.

The most widely used tabular format in data science is CSV. Columns are separated by a delimiter character (a comma, a tab, ...), which marks the boundary with the neighbouring columns.
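
A minimal pandas sketch for reading such a file; data.csv is a hypothetical file whose first line is a header:

    import pandas as pd

    # 'data.csv' is a hypothetical comma-delimited file with a header line.
    df = pd.read_csv("data.csv", na_values=["N/A"])
    print(df.head())        # first rows, one column per header field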

7_ Entropy

Entropy is a measure of uncertainty. High entropy means the data have high variance and thus contain a lot of information and/or noise.

For instance, a constant function f(x) = 4 for all x has no entropy: it is easily predictable, carries little information, has no noise, and can be succinctly represented. A function whose values hover around 4 has some entropy, while f(x) = random number has very high entropy due to noise.
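
A minimal sketch computing Shannon entropy (in bits) of a discrete sample: a constant sample has zero entropy, a uniform one has the maximum:

    import math
    from collections import Counter

    def entropy(values):
        """Shannon entropy (in bits) of a discrete sample."""
        counts = Counter(values)
        n = len(values)
        return sum((c / n) * math.log2(n / c) for c in counts.values())

    print(entropy([4, 4, 4, 4]))   # 0.0 : constant value, no uncertainty
    print(entropy([1, 2, 3, 4]))   # 2.0 : uniform over 4 values, maximal entropy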

8_ Data frames & series

A data frame is used for storing data tables. It is a list of vectors of equal length.

A series is an ordered, one-dimensional sequence of data points.
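
A minimal pandas sketch of both structures, with hypothetical values:

    import pandas as pd

    # A Series: an ordered, one-dimensional sequence of data points.
    ages = pd.Series([25, 32, 47], name="age")

    # A data frame: a table made of equal-length columns (Series).
    people = pd.DataFrame({
        "name": ["Ana", "Bob", "Carl"],
        "age": [25, 32, 47],
    })
    print(people.head())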

9_ Sharding

Sharding is horizontal (row-wise) database partitioning, as opposed to vertical (column-wise) partitioning such as normalization.

Why use Sharding?

  1. Database systems with large data sets or high-throughput applications can challenge the capacity of a single server.

  2. There are two methods to address this growth: vertical scaling and horizontal scaling.

  3. Vertical Scaling

    • Involves increasing the capacity of a single server.
    • But due to technological and economic restrictions, a single machine may not be sufficient for the given workload.
  4. Horizontal Scaling

    • Involves dividing the dataset and load over multiple servers, adding servers to increase capacity as required.
    • While the overall speed or capacity of a single machine may not be high, each machine handles a subset of the overall workload, potentially providing better efficiency than a single high-speed, high-capacity server.
    • The idea is to use concepts from distributed systems to achieve scale.
    • But it comes with the trade-off of increased complexity that goes hand in hand with distributed systems.
    • Many database systems provide horizontal scaling via sharding of the datasets (a minimal sketch of the idea follows this list).
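
A minimal Python sketch of hash-based sharding, assuming a hypothetical user_id shard key and four shards:

    import hashlib

    NUM_SHARDS = 4   # hypothetical number of database servers

    def shard_for(user_id: str) -> int:
        """Route a row to a shard by hashing its shard key (horizontal partitioning)."""
        digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_SHARDS

    for uid in ["u-001", "u-002", "u-003"]:
        print(uid, "-> shard", shard_for(uid))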

10_ OLAP

Online analytical processing (OLAP) is an approach to answering multi-dimensional analytical (MDA) queries swiftly.

OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining. Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas, with new applications coming up, such as agriculture.

The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP).

11_ Multidimensional Data model

12_ ETL

  • Extract

    • extracting the data from multiple heterogeneous source systems
    • data validation to confirm whether the data pulled has the correct/expected values in a given domain
  • Transform

    • the extracted data is fed into a pipeline which applies multiple functions on top of the data
    • these functions intend to convert the data into the format accepted by the end system
    • involves cleaning the data to remove noise, anomalies and redundant data
  • Load

    • loads the transformed data into the end target (a minimal sketch of the whole pipeline follows this list)
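
A minimal pandas sketch of the three steps, assuming hypothetical source and target files orders.csv and orders_clean.csv:

    import pandas as pd

    # Extract: pull raw data from a (hypothetical) source system.
    raw = pd.read_csv("orders.csv")

    # Transform: clean the data and convert it to the format the target expects.
    clean = (
        raw.drop_duplicates()                       # remove redundant rows
           .dropna(subset=["order_id", "amount"])   # drop rows missing key fields
           .assign(amount=lambda df: df["amount"].astype(float))
    )

    # Load: write the transformed data into the (hypothetical) end target.
    clean.to_csv("orders_clean.csv", index=False)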

13_ Reporting vs BI vs Analytics

14_ JSON and XML

JSON

JSON is a language-independent data format. Example describing a person:

{
  "firstName": "John",
  "lastName": "Smith",
  "isAlive": true,
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    },
    {
      "type": "mobile",
      "number": "123 456-7890"
    }
  ],
  "children": [],
  "spouse": null
}
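
A minimal sketch of parsing and producing such a document with Python's built-in json module (the embedded document is a shortened version of the example above):

    import json

    person_json = """
    { "firstName": "John", "lastName": "Smith", "age": 25,
      "address": {"city": "New York", "state": "NY"},
      "phoneNumbers": [{"type": "home", "number": "212 555-1234"}] }
    """

    person = json.loads(person_json)                # parse JSON text into Python objects
    print(person["firstName"], person["address"]["city"])
    person["age"] += 1
    print(json.dumps(person, indent=2))             # serialize back to JSON text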

XML

Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable.

<CATALOG>
  <PLANT>
    <COMMON>Bloodroot</COMMON>
    <BOTANICAL>Sanguinaria canadensis</BOTANICAL>
    <ZONE>4</ZONE>
    <LIGHT>Mostly Shady</LIGHT>
    <PRICE>$2.44</PRICE>
    <AVAILABILITY>031599</AVAILABILITY>
  </PLANT>
  <PLANT>
    <COMMON>Columbine</COMMON>
    <BOTANICAL>Aquilegia canadensis</BOTANICAL>
    <ZONE>3</ZONE>
    <LIGHT>Mostly Shady</LIGHT>
    <PRICE>$9.37</PRICE>
    <AVAILABILITY>030699</AVAILABILITY>
  </PLANT>
  <PLANT>
    <COMMON>Marsh Marigold</COMMON>
    <BOTANICAL>Caltha palustris</BOTANICAL>
    <ZONE>4</ZONE>
    <LIGHT>Mostly Sunny</LIGHT>
    <PRICE>$6.81</PRICE>
    <AVAILABILITY>051799</AVAILABILITY>
  </PLANT>
</CATALOG>
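
A minimal sketch of parsing such a catalog with Python's built-in xml.etree module (the embedded document is a shortened version of the example above):

    import xml.etree.ElementTree as ET

    catalog_xml = """
    <CATALOG>
      <PLANT><COMMON>Bloodroot</COMMON><PRICE>$2.44</PRICE></PLANT>
      <PLANT><COMMON>Columbine</COMMON><PRICE>$9.37</PRICE></PLANT>
    </CATALOG>
    """

    root = ET.fromstring(catalog_xml)               # parse the XML text
    for plant in root.findall("PLANT"):
        print(plant.find("COMMON").text, plant.find("PRICE").text)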

15_ NoSQL

NoSQL (which stands for Not Only SQL) databases stand in contrast to relational databases. Data are not necessarily structured, and there is no notion of keys between tables.

Any kind of data can be stored in a NoSQL database (JSON, CSV, ...) without designing a complex relational schema.

Commonly used NoSQL stacks: Cassandra, MongoDB, Redis, Oracle NoSQL ...
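
A minimal sketch with MongoDB's Python driver, assuming a MongoDB server is running locally; the database and collection names are hypothetical:

    from pymongo import MongoClient    # pip3 install pymongo

    client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB server
    db = client["demo_db"]                               # hypothetical database name

    # No fixed schema: documents of any shape can be stored side by side.
    db.people.insert_one({"firstName": "John", "age": 25, "tags": ["data", "science"]})
    print(db.people.find_one({"firstName": "John"}))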

16_ Regex

About

Regular expressions (regex) are commonly used in computer science.

They can be used for a wide range of tasks:

  • replacing text
  • extracting information from text (emails, phone numbers, etc.)
  • listing files with the .txt extension, ...

http://regexr.com/ is a good website for experimenting on Regex.

Utilization

To use them in Python, just import:

import re
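
A short sketch of the re module in action, extracting and masking hypothetical email addresses:

    import re

    text = "Contact: alice@example.com, bob@test.org (backup line: +33 1 23 45 67 89)"

    # Extract information: a simple (not fully RFC-compliant) email pattern.
    pattern = r"[\w.+-]+@[\w-]+\.[\w.]+"
    print(re.findall(pattern, text))        # ['alice@example.com', 'bob@test.org']

    # Text replacing: mask the addresses.
    print(re.sub(pattern, "<email>", text))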

17_ Vendor landscape

18_ Env Setup

2_ Statistics

Statistics-101 for data noobs

1_ Pick a dataset

Datasets repositories

Generalists

Medical

Other languages

French

2_ Descriptive statistics

Mean

In probability and statistics, population mean and expected value are used synonymously to refer to one measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution.

For a data set, the terms arithmetic mean, mathematical expectation, and sometimes average are used synonymously to refer to a central value of a discrete set of numbers: specifically, the sum of the values divided by the number of values.

mean = (x1 + x2 + ... + xn) / n

Median

The median is the value separating the higher half of a data sample, a population, or a probability distribution, from the lower half. In simple terms, it may be thought of as the "middle" value of a data set.

Descriptive statistics in Python

NumPy is a Python library widely used for numerical and statistical analysis.

Installation

pip3 install numpy

Utilization

import numpy
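
A small sketch computing the mean and the median of a hypothetical sample:

    import numpy

    data = [2, 3, 5, 7, 11, 13]        # hypothetical sample

    print(numpy.mean(data))            # sum of the values divided by their count (≈ 6.83)
    print(numpy.median(data))          # middle value separating the two halves (6.0)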

3_ Exploratory data analysis

This step includes the visualization and analysis of the data.

Raw data may have improper distributions, which can lead to issues further on.

When applying models we must also know how the data are distributed, for instance whether they are linearly or spirally distributed.

Guide to EDA in Python

Libraries in Python

Matplotlib

Library used to plot graphs in Python

Installation:

pip3 install matplotlib

Utilization:

import matplotlib.pyplot as plt

Pandas

Library used to manipulate large datasets in Python.

Installation:

pip3 install pandas

Utilization:

import pandas as pd

Seaborn

Another graph-plotting library for Python, built on top of Matplotlib.

Installation:

pip3 install seaborn

Utilization:

import seaborn as sns
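
A minimal sketch combining the three libraries on a hypothetical dataset:

    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns

    # Hypothetical dataset: house sizes and prices.
    df = pd.DataFrame({"size_m2": [30, 50, 75, 100, 120],
                       "price":   [90, 150, 220, 310, 360]})

    print(df.describe())                                # quick numerical summary

    sns.scatterplot(data=df, x="size_m2", y="price")    # visualize the relationship
    plt.show()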

PCA

PCA stands for principal component analysis.

We often need to know the shape of the data distribution, as seen previously, and therefore to plot the data.

Data can be multidimensional, that is, a dataset can have multiple features.

We can only plot two-dimensional data, so for multidimensional data we project the distribution onto two dimensions while preserving its principal components, in order to get an idea of the actual distribution from the 2D plot.

It is also used for dimensionality reduction. Often several features do not contribute any significant insight about the data distribution; such features only add complexity and increase the dimensionality of the data, so dropping them reduces the dimensionality.
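
A minimal scikit-learn sketch projecting a hypothetical 4-dimensional dataset onto its two principal components:

    import numpy as np
    from sklearn.decomposition import PCA    # pip3 install scikit-learn

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))            # hypothetical data: 100 samples, 4 features

    pca = PCA(n_components=2)                # keep the 2 principal components
    X_2d = pca.fit_transform(X)              # project onto 2 dimensions for plotting

    print(X_2d.shape)                        # (100, 2)
    print(pca.explained_variance_ratio_)     # share of variance kept by each component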

Mathematical Explanation

Application in Python

4_ Histograms

Histograms are representations of the distribution of numerical data. The procedure consists of binning the numeric values using range divisions, i.e. the entire range over which the data vary is split into several fixed intervals, and the count or frequency of occurrences of the values falling in each bin is represented.

Histograms

plot

In Python, Pandas, Matplotlib and Seaborn can be used to create histograms.
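
A minimal Matplotlib sketch, binning a hypothetical random sample into 10 fixed intervals:

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.default_rng(0).normal(size=1000)   # hypothetical sample

    plt.hist(data, bins=10)      # split the range into 10 bins and count occurrences
    plt.xlabel("value")
    plt.ylabel("frequency")
    plt.show()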

5_ Percentiles & outliers

Percentiles

Percentiles are numerical measures in statistics which represent what percentage of the data in a distribution falls below a given value.

For instance, the 70th percentile is the value below which 70% of the data in the distribution fall.

Percentiles
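
A small NumPy sketch, using an arbitrary sample:

    import numpy as np

    data = [12, 15, 18, 21, 24, 30, 45, 60, 75, 90]   # hypothetical sample

    # 70% of the values in the sample fall below this point.
    print(np.percentile(data, 70))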

Outliers

Outliers are (numerical) data points that differ significantly from the other points, i.e. from the majority of the distribution. Such points can distort central measures of the distribution, like the mean and the median, so they need to be detected and removed.

Outliers

Box plots can be used to detect outliers in the data. They can be created using the Seaborn library.

Image_Box_Plot
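
A minimal sketch flagging outliers with the interquartile-range rule and drawing a box plot with Seaborn; the sample is hypothetical:

    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt

    data = np.array([12, 14, 15, 15, 16, 17, 18, 19, 95])   # 95 is an obvious outlier

    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]
    print(outliers)                  # [95]

    sns.boxplot(x=data)              # the outlier appears as an isolated point
    plt.show()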

6_ Probability theory

Probability is the likelihood of an event in a random experiment. For instance, if a coin is tossed, the chance of getting heads is 50%, so the probability is 0.5.

Sample space: the set of all possible outcomes of a random experiment. Favourable outcomes: the set of outcomes we are looking for in a random experiment.

Probability = (Number of favourable outcomes) / (Total number of outcomes in the sample space)

Probability theory is a branch of mathematics that is associated with the concept of probability.
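
A small sketch comparing the formula with a simulated fair-coin toss:

    import random

    # Theoretical: 1 favourable outcome (heads) out of 2 possible outcomes.
    print(1 / 2)                                    # 0.5

    # Empirical: simulate many tosses and measure the frequency of heads.
    tosses = [random.choice(["heads", "tails"]) for _ in range(100_000)]
    print(tosses.count("heads") / len(tosses))      # close to 0.5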

