How to make a decision tree in Python


1.10. Decision Trees — scikit-learn 1.1.2 documentation

Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. A tree can be seen as a piecewise constant approximation.

For instance, in the example below, decision trees learn from data to approximate a sine curve with a set of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the fitter the model.

Some advantages of decision trees are:

  • Simple to understand and to interpret. Trees can be visualized.

  • Requires little data preparation. Other techniques often require data normalization, dummy variables need to be created and blank values to be removed. Note however that this module does not support missing values.

  • The cost of using the tree (i.e., predicting data) is logarithmic in the number of data points used to train the tree.

  • Able to handle both numerical and categorical data. However, the scikit-learn implementation does not support categorical variables for now. Other techniques are usually specialized in analyzing datasets that have only one type of variable. See algorithms for more information.

  • Able to handle multi-output problems.

  • Uses a white box model. If a given situation is observable in a model, the explanation for the condition is easily explained by boolean logic. By contrast, in a black box model (e.g., in an artificial neural network), results may be more difficult to interpret.

  • Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.

  • Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.

The disadvantages of decision trees include:

  • Decision-tree learners can create over-complex trees that do not generalize the data well. This is called overfitting. Mechanisms such as pruning, setting the minimum number of samples required at a leaf node or setting the maximum depth of the tree are necessary to avoid this problem.

  • Decision trees can be unstable because small variations in the data might result in a completely different tree being generated. This problem is mitigated by using decision trees within an ensemble.

  • Predictions of decision trees are neither smooth nor continuous, but piecewise constant approximations as seen in the above figure. Therefore, they are not good at extrapolation.

  • The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.

  • There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems.

  • Decision tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting with the decision tree.
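
As a rough, illustrative sketch of the mitigations mentioned above (depth and leaf-size limits plus cost-complexity pruning against overfitting, and bagging an ensemble of trees against instability), one might write something like:

>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import BaggingClassifier
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> # Constrain tree growth to reduce overfitting.
>>> pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5, ccp_alpha=0.01).fit(X, y)
>>> # Bag many trees on bootstrap samples to reduce instability.
>>> bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0).fit(X, y)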

1.10.1. Classification

DecisionTreeClassifier is a class capable of performing multi-class classification on a dataset.

As with other classifiers, DecisionTreeClassifier takes as input two arrays: an array X, sparse or dense, of shape (n_samples, n_features) holding the training samples, and an array Y of integer values, shape (n_samples,), holding the class labels for the training samples:

>>> from sklearn import tree
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, Y)

After being fitted, the model can then be used to predict the class of samples:

>>> clf.predict([[2., 2.]])
array([1])

In case that there are multiple classes with the same and highest probability, the classifier will predict the class with the lowest index amongst those classes.

As an alternative to outputting a specific class, the probability of each class can be predicted, which is the fraction of training samples of the class in a leaf:

>>> clf.predict_proba([[2., 2.]])
array([[0., 1.]])

DecisionTreeClassifier is capable of both binary (where the labels are [-1, 1]) classification and multiclass (where the labels are [0, …, K-1]) classification.

Using the Iris dataset, we can construct a tree as follows:

>>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, y)

Once trained, you can plot the tree with the plot_tree function:

>>> tree.plot_tree(clf)
[...]

We can also export the tree in Graphviz format using the export_graphviz exporter. If you use the conda package manager, the graphviz binaries and the python package can be installed with conda install python-graphviz.

Alternatively binaries for graphviz can be downloaded from the graphviz project homepage, and the Python wrapper installed from pypi with pip install graphviz.

Below is an example graphviz export of the above tree trained on the entire iris dataset; the results are saved in an output file iris.pdf:

>>> import graphviz
>>> dot_data = tree.export_graphviz(clf, out_file=None)
>>> graph = graphviz.Source(dot_data)
>>> graph.render("iris")

The export_graphviz exporter also supports a variety of aesthetic options, including coloring nodes by their class (or value for regression) and using explicit variable and class names if desired. Jupyter notebooks also render these plots inline automatically:

>>> dot_data = tree.export_graphviz(clf, out_file=None,
...                                 feature_names=iris.feature_names,
...                                 class_names=iris.target_names,
...                                 filled=True, rounded=True,
...                                 special_characters=True)
>>> graph = graphviz.Source(dot_data)
>>> graph

Alternatively, the tree can also be exported in textual format with the function export_text. This method doesn’t require the installation of external libraries and is more compact:

>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.tree import export_text
>>> iris = load_iris()
>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
>>> decision_tree = decision_tree.fit(iris.data, iris.target)
>>> r = export_text(decision_tree, feature_names=iris['feature_names'])
>>> print(r)
|--- petal width (cm) <= 0.80
|   |--- class: 0
|--- petal width (cm) >  0.80
|   |--- petal width (cm) <= 1.75
|   |   |--- class: 1
|   |--- petal width (cm) >  1.75
|   |   |--- class: 2

Examples:

  • Plot the decision surface of decision trees trained on the iris dataset

  • Understanding the decision tree structure

1.10.2. Regression

Decision trees can also be applied to regression problems, using the DecisionTreeRegressor class.

As in the classification setting, the fit method will take as argument arrays X and y, only that in this case y is expected to have floating point values instead of integer values:

>>> from sklearn import tree
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> clf = tree.DecisionTreeRegressor()
>>> clf = clf.fit(X, y)
>>> clf.predict([[1, 1]])
array([0.5])

Examples:

  • Decision Tree Regression

1.10.3. Multi-output problems

A multi-output problem is a supervised learning problem with several outputs to predict, that is when Y is a 2d array of shape (n_samples, n_outputs).

When there is no correlation between the outputs, a very simple way to solve this kind of problem is to build n independent models, i.e. one for each output, and then to use those models to independently predict each one of the n outputs. However, because it is likely that the output values related to the same input are themselves correlated, an often better way is to build a single model capable of predicting simultaneously all n outputs. First, it requires lower training time since only a single estimator is built. Second, the generalization accuracy of the resulting estimator may often be increased.

With regard to decision trees, this strategy can readily be used to support multi-output problems. This requires the following changes:

  • Store n output values in leaves, instead of 1;

  • Use splitting criteria that compute the average reduction across all n outputs.

This module offers support for multi-output problems by implementing this strategy in both DecisionTreeClassifier and DecisionTreeRegressor. If a decision tree is fit on an output array Y of shape (n_samples, n_outputs), then the resulting estimator will:

  • Output n_output values upon predict;

  • Output a list of n_output arrays of class probabilities upon predict_proba.
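
A minimal sketch of this behavior (not taken from the scikit-learn examples), fitting a regressor on a two-column sine/cosine target:

>>> import numpy as np
>>> from sklearn.tree import DecisionTreeRegressor
>>> rng = np.random.RandomState(0)
>>> X = np.sort(5 * rng.rand(100, 1), axis=0)
>>> Y = np.column_stack([np.sin(X).ravel(), np.cos(X).ravel()])  # two outputs per sample
>>> reg = DecisionTreeRegressor(max_depth=4).fit(X, Y)
>>> reg.predict([[2.5]]).shape  # one row of predictions, two output values
(1, 2)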

The use of multi-output trees for regression is demonstrated in Multi-output Decision Tree Regression. In this example, the input X is a single real value and the outputs Y are the sine and cosine of X.

The use of multi-output trees for classification is demonstrated in Face completion with a multi-output estimators. In this example, the inputs X are the pixels of the upper half of faces and the outputs Y are the pixels of the lower half of those faces.

Examples:

  • Multi-output Decision Tree Regression

  • Face completion with a multi-output estimators

1.10.4. Complexity

In general, the run time cost to construct a balanced binary tree is \(O(n_{samples}n_{features}\log(n_{samples}))\) and query time is \(O(\log(n_{samples}))\). Since the algorithm searches over the \(O(n_{features})\) candidate features at each node, the total cost over the entire tree is \(O(n_{features}n_{samples}^{2}\log(n_{samples}))\).

    1.10.5. Tips on practical use

    • Decision trees tend to overfit on data with a large number of features. Getting the right ratio of samples to number of features is important, since a tree with few samples in high dimensional space is very likely to overfit.

    • Consider performing dimensionality reduction (PCA, ICA, or Feature selection) beforehand to give your tree a better chance of finding features that are discriminative.

    • Understanding the decision tree structure will help in gaining more insights about how the decision tree makes predictions, which is important for understanding the important features in the data.

    • Visualize your tree as you are training by using the export function. Use max_depth=3 as an initial tree depth to get a feel for how the tree is fitting to your data, and then increase the depth.

    • Remember that the number of samples required to populate the tree doubles for each additional level the tree grows to. Use max_depth to control the size of the tree to prevent overfitting.

    • Use min_samples_split or min_samples_leaf to ensure that multiple samples inform every decision in the tree, by controlling which splits will be considered. A very small number will usually mean the tree will overfit, whereas a large number will prevent the tree from learning the data. Try min_samples_leaf=5 as an initial value. If the sample size varies greatly, a float number can be used as percentage in these two parameters. While min_samples_split can create arbitrarily small leaves, min_samples_leaf guarantees that each leaf has a minimum size, avoiding low-variance, over-fit leaf nodes in regression problems. For classification with few classes, min_samples_leaf=1 is often the best choice.

Note that min_samples_split considers samples directly and independent of sample_weight, if provided (e.g. a node with m weighted samples is still treated as having exactly m samples). Consider min_weight_fraction_leaf or min_impurity_decrease if accounting for sample weights is required at splits.

    • Balance your dataset before training to prevent the tree from being biased toward the classes that are dominant. Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (sample_weight) for each class to the same value. Also note that weight-based pre-pruning criteria, such as min_weight_fraction_leaf, will then be less biased toward dominant classes than criteria that are not aware of the sample weights, like min_samples_leaf.

    • If the samples are weighted, it will be easier to optimize the tree structure using weight-based pre-pruning criterion such as min_weight_fraction_leaf, which ensure that leaf nodes contain at least a fraction of the overall sum of the sample weights.

    • All decision trees use np.float32 arrays internally. If training data is not in this format, a copy of the dataset will be made.

    • If the input matrix X is very sparse, it is recommended to convert to sparse csc_matrix before calling fit and sparse csr_matrix before calling predict. Training time can be orders of magnitude faster for a sparse matrix input compared to a dense matrix when features have zero values in most of the samples.
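
A compact sketch combining several of the tips above, using the iris data purely as a stand-in (iris is dense, so the sparse conversion is only illustrative):

>>> from scipy.sparse import csc_matrix, csr_matrix
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.utils.class_weight import compute_sample_weight
>>> X, y = load_iris(return_X_y=True)
>>> clf = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)  # start shallow, inspect, then deepen
>>> weights = compute_sample_weight(class_weight="balanced", y=y)  # equalize per-class weight sums
>>> clf = clf.fit(csc_matrix(X), y, sample_weight=weights)         # CSC format for fit
>>> preds = clf.predict(csr_matrix(X))                             # CSR format for predict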

    1.10.6. Tree algorithms: ID3, C4.5, C5.0 and CART

    What are all the various decision tree algorithms and how do they differ from each other? Which one is implemented in scikit-learn?

    ID3 (Iterative Dichotomiser 3) was developed in 1986 by Ross Quinlan. The algorithm creates a multiway tree, finding for each node (i.e. in a greedy manner) the categorical feature that will yield the largest information gain for categorical targets. Trees are grown to their maximum size and then a pruning step is usually applied to improve the ability of the tree to generalize to unseen data.

    C4.5 is the successor to ID3 and removed the restriction that features must be categorical by dynamically defining a discrete attribute (based on numerical variables) that partitions the continuous attribute value into a discrete set of intervals. C4.5 converts the trained trees (i.e. the output of the ID3 algorithm) into sets of if-then rules. The accuracy of each rule is then evaluated to determine the order in which they should be applied. Pruning is done by removing a rule’s precondition if the accuracy of the rule improves without it.

    C5.0 is Quinlan’s latest version release under a proprietary license. It uses less memory and builds smaller rulesets than C4.5 while being more accurate.

CART (Classification and Regression Trees) is very similar to C4.5, but it differs in that it supports numerical target variables (regression) and does not compute rule sets. CART constructs binary trees using the feature and threshold that yield the largest information gain at each node.

scikit-learn uses an optimized version of the CART algorithm; however, the scikit-learn implementation does not support categorical variables for now.

1.10.7. Mathematical formulation

Given training vectors \(x_i \in R^n\), \(i=1,\ldots,l\) and a label vector \(y \in R^l\), a decision tree recursively partitions the feature space such that samples with the same labels or similar target values are grouped together.

Let the data at node \(m\) be represented by \(Q_m\) with \(n_m\) samples. For each candidate split \(\theta = (j, t_m)\) consisting of a feature \(j\) and threshold \(t_m\), partition the data into \(Q_m^{left}(\theta)\) and \(Q_m^{right}(\theta)\) subsets:

\[Q_m^{left}(\theta) = \{(x, y) \mid x_j \leq t_m\}\]

\[Q_m^{right}(\theta) = Q_m \setminus Q_m^{left}(\theta)\]

The quality of a candidate split of node \(m\) is then computed using an impurity function or loss function \(H()\), the choice of which depends on the task being solved (classification or regression):

\[G(Q_m, \theta) = \frac{n_m^{left}}{n_m} H(Q_m^{left}(\theta)) + \frac{n_m^{right}}{n_m} H(Q_m^{right}(\theta))\]

Select the parameters that minimise the impurity:

\[\theta^* = \operatorname{argmin}_\theta \, G(Q_m, \theta)\]

Recurse for subsets \(Q_m^{left}(\theta^*)\) and \(Q_m^{right}(\theta^*)\) until the maximum allowable depth is reached, \(n_m < \min_{samples}\) or \(n_m = 1\).

    1.10.7.1. Classification criteria

    If a target is a classification outcome taking on values 0,1,…,K-1, for node \(m\), let

    \[p_{mk} = \frac{1}{n_m} \sum_{y \in Q_m} I(y = k)\]

    be the proportion of class k observations in node \(m\). If \(m\) is a terminal node, predict_proba for this region is set to \(p_{mk}\). Common measures of impurity are the following.

    Gini:

    \[H(Q_m) = \sum_k p_{mk} (1 - p_{mk})\]

    Log Loss or Entropy:

    \[H(Q_m) = - \sum_k p_{mk} \log(p_{mk})\]
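
As a quick numerical check of these two impurity formulas (a sketch, not part of the original documentation), for a node with class proportions \(p_{mk}\):

>>> import numpy as np
>>> p = np.array([0.5, 0.3, 0.2])        # class proportions p_mk in one node
>>> gini = np.sum(p * (1 - p))           # sum_k p_mk (1 - p_mk)
>>> entropy = -np.sum(p * np.log(p))     # -sum_k p_mk log(p_mk)
>>> round(float(gini), 3), round(float(entropy), 3)
(0.62, 1.03)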

    Note

The entropy criterion computes the Shannon entropy of the possible classes. It takes the class frequencies of the training data points that reached a given leaf \(m\) as their probability. Using the Shannon entropy as tree node splitting criterion is equivalent to minimizing the log loss (also known as cross-entropy and multinomial deviance) between the true labels \(y_i\) and the probabilistic predictions \(T_k(x_i)\) of the tree model \(T\) for class \(k\).

    To see this, first recall that the log loss of a tree model \(T\) computed on a dataset \(D\) is defined as follows:

    \[\mathrm{LL}(D, T) = -\frac{1}{n} \sum_{(x_i, y_i) \in D} \sum_k I(y_i = k) \log(T_k(x_i))\]

    where \(D\) is a training dataset of \(n\) pairs \((x_i, y_i)\).

    In a classification tree, the predicted class probabilities within leaf nodes are constant, that is: for all \((x_i, y_i) \in Q_m\), one has: \(T_k(x_i) = p_{mk}\) for each class \(k\).

    This property makes it possible to rewrite \(\mathrm{LL}(D, T)\) as the sum of the Shannon entropies computed for each leaf of \(T\) weighted by the number of training data points that reached each leaf:

    \[\mathrm{LL}(D, T) = \sum_{m \in T} \frac{n_m}{n} H(Q_m)\]

    1.10.7.2. Regression criteria

If the target is a continuous value, then for node \(m\), common criteria to minimize as for determining locations for future splits are Mean Squared Error (MSE or L2 error), Poisson deviance as well as Mean Absolute Error (MAE or L1 error).

Mean Squared Error:

\[ \begin{align}\begin{aligned}\bar{y}_m = \frac{1}{n_m} \sum_{y \in Q_m} y\\H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} (y - \bar{y}_m)^2\end{aligned}\end{align} \]

    Half Poisson deviance:

    \[H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} (y \log\frac{y}{\bar{y}_m} - y + \bar{y}_m)\]

Setting criterion="poisson" might be a good choice if your target is a count or a frequency (count per some unit). In any case, \(y \ge 0\) is a necessary condition to use this criterion. Note that it fits much slower than the MSE criterion.
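
A minimal sketch of selecting this criterion, with made-up count data:

>>> import numpy as np
>>> from sklearn.tree import DecisionTreeRegressor
>>> rng = np.random.RandomState(0)
>>> X = rng.rand(200, 2)
>>> y = rng.poisson(lam=3.0, size=200)   # counts, so y >= 0 holds
>>> reg = DecisionTreeRegressor(criterion="poisson", max_depth=3)
>>> reg = reg.fit(X, y)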

    Mean Absolute Error:

    \[ \begin{align}\begin{aligned}median(y)_m = \underset{y \in Q_m}{\mathrm{median}}(y)\\H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} |y - median(y)_m|\end{aligned}\end{align} \]

    Note that it fits much slower than the MSE criterion.

    1.10.8. Minimal Cost-Complexity Pruning

    Minimal cost-complexity pruning is an algorithm used to prune a tree to avoid over-fitting, described in Chapter 3 of [BRE]. This algorithm is parameterized by \(\alpha\ge0\) known as the complexity parameter. The complexity parameter is used to define the cost-complexity measure, \(R_\alpha(T)\) of a given tree \(T\):

    \[R_\alpha(T) = R(T) + \alpha|\widetilde{T}|\]

    where \(|\widetilde{T}|\) is the number of terminal nodes in \(T\) and \(R(T)\) is traditionally defined as the total misclassification rate of the terminal nodes. Alternatively, scikit-learn uses the total sample weighted impurity of the terminal nodes for \(R(T)\). As shown above, the impurity of a node depends on the criterion. Minimal cost-complexity pruning finds the subtree of \(T\) that minimizes \(R_\alpha(T)\).

    The cost complexity measure of a single node is \(R_\alpha(t)=R(t)+\alpha\). The branch, \(T_t\), is defined to be a tree where node \(t\) is its root. In general, the impurity of a node is greater than the sum of impurities of its terminal nodes, \(R(T_t)<R(t)\). However, the cost complexity measure of a node, \(t\), and its branch, \(T_t\), can be equal depending on \(\alpha\). We define the effective \(\alpha\) of a node to be the value where they are equal, \(R_\alpha(T_t)=R_\alpha(t)\) or \(\alpha_{eff}(t)=\frac{R(t)-R(T_t)}{|T|-1}\). A non-terminal node with the smallest value of \(\alpha_{eff}\) is the weakest link and will be pruned. This process stops when the pruned tree’s minimal \(\alpha_{eff}\) is greater than the ccp_alpha parameter.
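
In scikit-learn this pruning is controlled by the ccp_alpha parameter; a brief sketch (using the iris data for illustration) of picking an alpha from the candidate effective alphas computed on the training data:

>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
>>> alphas = path.ccp_alphas   # candidate effective alphas, in increasing order
>>> pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alphas[-2]).fit(X, y)
>>> pruned.get_n_leaves() <= DecisionTreeClassifier(random_state=0).fit(X, y).get_n_leaves()
True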

    Examples:

    • Post pruning decision trees with cost complexity pruning

    References:

    [BRE]

    L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth, Belmont, CA, 1984.

    • https://en.wikipedia.org/wiki/Decision_tree_learning

    • https://en.wikipedia.org/wiki/Predictive_analytics

    • J.R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

    • T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical Learning, Springer, 2009.

    sklearn.tree.plot_tree — scikit-learn 1.1.2 documentation

    sklearn.tree.plot_tree(decision_tree, *, max_depth=None, feature_names=None, class_names=None, label='all', filled=False, impurity=True, node_ids=False, proportion=False, rounded=False, precision=3, ax=None, fontsize=None)[source]

    Plot a decision tree.

    The sample counts that are shown are weighted with any sample_weights that might be present.

    The visualization is fit automatically to the size of the axis. Use the figsize or dpi arguments of plt.figure to control the size of the rendering.
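
For instance, a hypothetical sketch of controlling the rendered size through matplotlib (figsize and dpi belong to plt.figure, not to plot_tree):

>>> import matplotlib.pyplot as plt
>>> from sklearn import tree
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> clf = tree.DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)
>>> fig = plt.figure(figsize=(10, 6), dpi=100)  # size and resolution come from matplotlib
>>> annotations = tree.plot_tree(clf, feature_names=iris.feature_names, filled=True)
>>> plt.show()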

    Read more in the User Guide.

    New in version 0.21.

    Parameters:
decision_tree : decision tree regressor or classifier

    The decision tree to be plotted.

max_depth : int, default=None

    The maximum depth of the representation. If None, the tree is fully generated.

feature_names : list of strings, default=None

    Names of each of the features. If None, generic names will be used (“X[0]”, “X[1]”, …).

class_names : list of str or bool, default=None

    Names of each of the target classes in ascending numerical order. Only relevant for classification and not supported for multi-output. If True, shows a symbolic representation of the class name.

label : {'all', 'root', 'none'}, default='all'

    Whether to show informative labels for impurity, etc. Options include ‘all’ to show at every node, ‘root’ to show only at the top root node, or ‘none’ to not show at any node.

filled : bool, default=False

    When set to True, paint nodes to indicate majority class for classification, extremity of values for regression, or purity of node for multi-output.

impurity : bool, default=True

    When set to True, show the impurity at each node.

node_ids : bool, default=False

    When set to True, show the ID number on each node.

proportion : bool, default=False

    When set to True, change the display of ‘values’ and/or ‘samples’ to be proportions and percentages respectively.

rounded : bool, default=False

    When set to True, draw node boxes with rounded corners and use Helvetica fonts instead of Times-Roman.

precision : int, default=3

    Number of digits of precision for floating point in the values of impurity, threshold and value attributes of each node.

ax : matplotlib axis, default=None

    Axes to plot to. If None, use current axis. Any previous content is cleared.

fontsize : int, default=None

    Size of text font. If None, determined automatically to fit figure.

    Returns:
annotations : list of artists

    List containing the artists for the annotation boxes making up the tree.

    Examples

>>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier(random_state=0)
>>> iris = load_iris()
>>> clf = clf.fit(iris.data, iris.target)
>>> tree.plot_tree(clf)
[...]

Examples using sklearn.tree.plot_tree:

  • Plot the decision surface of decision trees trained on the iris dataset

  • Understanding the decision tree structure

    Python Decision Trees with Scikit-Learn

    Original by Scott Robinson.

    Introduction

    A decision tree is one of the most commonly and widely used supervised machine learning algorithms that can perform both regression and classification problems. The intuition behind the decision tree algorithm is simple yet very powerful.

    For each attribute in the dataset, the decision tree algorithm generates a node where the most important attribute is placed at the root node. For evaluation, we start at the root node and work our way down the tree, following the appropriate node that satisfies our condition or “decision”. This process continues until the end node containing the prediction or decision tree result is reached.

    This may seem a bit complicated at first, but you probably don't realize that you've been using decision trees to make decisions all your life without even knowing it. Imagine a situation where a person asks you to lend him a car for a day and you have to make a decision whether to lend him a car or not. There are several factors that help determine your decision, some of which are listed below:

    1. Is this person a close friend or just an acquaintance? If the person is just an acquaintance, then decline the request; if the person is a friend, then go to the next step.
    2. Is the person asking for a car for the first time? If so, lend them a car, otherwise, move on to the next step.
    3. Was the car damaged the last time they returned the car? If yes, decline the request; if not, lend them a car.

    The decision tree for the above scenario looks like this:
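
The figure is not reproduced here, but the same three rules can be sketched as nested if/else logic (the function and argument names below are invented for illustration):

 def lend_car(close_friend, first_time, damaged_last_time):
     # 1. Acquaintance -> decline; close friend -> next question.
     if not close_friend:
         return False
     # 2. First time asking -> lend the car.
     if first_time:
         return True
     # 3. Otherwise lend only if the car came back undamaged last time.
     return not damaged_last_time

 print(lend_car(close_friend=True, first_time=False, damaged_last_time=True))  # False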

    Advantages of decision trees

    There are several advantages to using decision trees for predictive analysis:

    1. Decision trees can be used to predict both continuous and discrete values, i.e. they work well for both regression and classification problems.
    2. They require relatively less effort to train the algorithm.
    3. They can be used to classify non-linearly separable data.
    4. They are very fast and efficient compared to KNN and other classification algorithms.

    Implementing decision trees with Python Scikit Learn

    In this section, we implement a decision tree algorithm using the Scikit-Learn Python library. In the following examples, we will solve both classification and regression problems using a decision tree.

    Note : The classification and regression tasks were performed in a Jupyter IPython notebook.

    1. Decision Tree for Classification

    In this section, we will predict whether a banknote is genuine or counterfeit depending on four different banknote image attributes. The attributes include wavelet-transformed image variance, image kurtosis, image entropy, and image skewness.

    Dataset

The dataset for this task can be downloaded from this link:

For more information about this dataset, check out the UCI ML repo for this dataset.

The rest of the steps to implement this algorithm in Scikit-Learn are identical to any typical machine learning task: we import libraries and datasets, perform some data analysis, split the data into training and test sets, train the algorithm, make predictions, and finally evaluate the algorithm's performance on our dataset.

    Import libraries

    The following script imports the necessary libraries:

 import pandas as pd
 import numpy as np
 import matplotlib.pyplot as plt
 %matplotlib inline 
Dataset import

 dataset = pd.read_csv("D:/Datasets/bill_authentication.csv") 

    Data Analysis

    Run the following command to see the number of rows and columns in our dataset:

     dataset.shape 

The result will show "(1372, 5)", which means our dataset has 1372 records and 5 attributes.

    Run the following command to check the first five records of the dataset:

     dataset.head() 

    The result will look like this:

    Variance  Skewness  Curtosis  Entropy  Class
 0   3.62160    8.6661   -2.8073 -0.44699      0
 1   4.54590    8.1674   -2.4586 -1.46210      0
 2   3.86600   -2.6383    1.9242  0.10645      0
 3   3.45660    9.5228   -4.0112 -3.59440      0
 4   0.32924    4.5718   -4.4552 -0.98880      0

Preparing the data

In this section, we split our data into attributes and labels, and then split the resulting data into training and test sets. This way we can train our algorithm on one dataset and then test it on a completely different dataset that the algorithm hasn't seen yet. This gives you a better idea of how your trained algorithm will actually perform.

    To separate data into attributes and labels, run the following code:

 X = dataset.drop('Class', axis=1)
 y = dataset['Class'] 

Here the variable X contains all the columns from the dataset except for the “Class” column, which is the label. The variable y contains the values from the “Class” column; X is our set of attributes and y holds the corresponding labels.

The final preprocessing step is to split our data into training and test sets. Scikit-Learn's model_selection module contains the train_test_split function, which we will use to randomly split the data into training and test sets. To do this, run the following code:

 from sklearn.model_selection import train_test_split
 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20) 

In the code above, the test_size parameter specifies the proportion of data held out for evaluation: 20% of the data goes to the test set and the remaining 80% is used for training.

    Training and making predictions

Once the data has been split into training and test sets, the last step is to train the decision tree algorithm on the data and make predictions. Scikit-Learn's tree module contains built-in classes for various decision tree algorithms. Since we are performing a classification task here, we will use the DecisionTreeClassifier class in this example. Its fit method trains the algorithm on the training data passed to it as parameters. Run the following script to train the algorithm:

 from sklearn.tree import DecisionTreeClassifier
 classifier = DecisionTreeClassifier()
 classifier.fit(X_train, y_train) 

Now that our classifier has been trained, let's make predictions on the test data. Predictions are made with the predict method of the DecisionTreeClassifier class. Take a look at the following code:

     y_pred = classifier.predict(X_test) 
    Evaluation of the algorithm

At the moment we have trained our algorithm and made some predictions. Now let's see how accurate our algorithm is. For classification problems, the commonly used metrics are the confusion matrix, precision, recall, and F1 score. Luckily for us, the Scikit-Learn metrics library contains the classification_report and confusion_matrix methods, which can be used to calculate these metrics for us:

 from sklearn.metrics import classification_report, confusion_matrix
 print(confusion_matrix(y_test, y_pred))
 print(classification_report(y_test, y_pred)) 

    This will result in the following estimate:

 [[142   2]
  [  2 129]]
              precision    recall  f1-score   support

           0       0.99      0.99      0.99       144
           1       0.98      0.98      0.98       131

 avg / total       0.99      0.99      0.99       275 

    You can see from the confusion matrix that out of 275 test instances our algorithm misclassified only 4. This is 98.5% accurate. Not so bad!
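
If you prefer a single number, the accuracy can also be computed directly (a small sketch; the exact value depends on the random train/test split):

 from sklearn.metrics import accuracy_score
 print(accuracy_score(y_test, y_pred))  # roughly 0.985 for the split shown above 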

    2. Regression Decision Tree

The process of solving a regression problem with a decision tree in Scikit-Learn is very similar to the classification process. However, for regression we use the DecisionTreeRegressor class of the tree library. In addition, the evaluation metrics for regression differ from those for classification. The rest of the process is almost the same.

    Dataset

    The dataset we'll be using for this section is the same one used in the Linear Regression article. We will use this dataset to attempt to predict gas consumption (in millions of gallons) in the 48 US states based on gas tax (in cents), per capita income (in dollars), paved highways (in miles), and the proportion of the population with driver's license.

Data set available at this link:

Details of the data set can be found in the original source.

    The first two columns in the dataset above do not contain any useful information, so they have been removed from the dataset file.

    Now let's apply our decision tree algorithm to this data to try and predict gas consumption based on this data.

    Import libraries
 import pandas as pd
 import numpy as np
 import matplotlib.pyplot as plt
 %matplotlib inline 
    Import dataset
 dataset = pd.read_csv('D:/Datasets/petrol_consumption.csv') 
    Data analysis

    We will again use the data frame's head function to see what our data actually looks like:

     dataset.head() 

    The output looks like this:

    Petrol_tax  Average_income  Paved_Highways  Population_Driver_licence(%)  Petrol_Consumption
 0         9.0            3571            1976                         0.525                 541
 1         9.0            4092            1250                         0.572                 524
 2         9.0            3865            1586                         0.580                 561
 3         7.5            4870            2351                         0.529                 414
 4         8.0            4399             431                         0.544                 410

To see the statistical details of the dataset, run the following command:

 dataset.describe() 

The output looks like this:

        Petrol_tax  Average_income  Paved_Highways  Population_Driver_licence(%)  Petrol_Consumption
 count   48.000000       48.000000       48.000000                     48.000000           48.000000
 mean     7.668333     4241.833333     5565.416667                      0.570333          576.770833
 std      0.950770      573.623768     3491.507166                      0.055470          111.885816
 min      5.000000     3063.000000      431.000000                      0.451000          344.000000
 25%      7.000000     3739.000000     3110.250000                      0.529750          509.500000
 50%      7.500000     4298.000000     4735.500000                      0.564500          568.500000
 75%      8.125000     4578.750000     7156.000000                      0.595250          632.750000
 max     10.000000     5342.000000    17782.000000                      0.724000          986.000000

Preparing the data

As with the classification task, we split the data into attributes (X) and labels (y):

 X = dataset.drop('Petrol_Consumption', axis=1)
 y = dataset['Petrol_Consumption'] 

Here the variable X contains all the columns from the dataset except for the 'Petrol_Consumption' column, which is the label. The variable y contains the values from the 'Petrol_Consumption' column, so X is our set of attributes and y holds the corresponding labels.

Next, split the data into training and test sets:

 from sklearn.model_selection import train_test_split
 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) 

Training and making predictions

To train the tree and make predictions, we use the DecisionTreeRegressor class of the tree library:

 from sklearn.tree import DecisionTreeRegressor
 regressor = DecisionTreeRegressor()
 regressor.fit(X_train, y_train) 

Now make predictions on the test set:

     y_pred = regressor.predict(X_test) 

Now let's compare some of our predicted values with the actual values:

 df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
 df 

     Actual  Predicted
 41     699      631.0
 2      561      524.0
 12     525      510.0
 36     640      704.0
 38     648      524.0
 9      498      510.0
 24     460      510.0
 13     508      603.0
 35     644      631.0

Evaluation of the algorithm

The following metrics are commonly used to evaluate the performance of a regression algorithm: mean absolute error, mean squared error, and root mean squared error. The Scikit-Learn metrics library lets us calculate all three:

 from sklearn import metrics
 print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
 print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
 print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) 

The output will look something like this:

 Mean Absolute Error: 54.7
 Mean Squared Error: 4228.9
 Root Mean Squared Error: 65.0299930801 

The mean absolute error for our algorithm is 54.7, which is less than 10 percent of the mean of all the values in the "Petrol_Consumption" column (576.77; the error is roughly 9.5 percent of that mean). This means that our algorithm did a reasonably good prediction job.

    Want to learn more about Scikit-Learn and other useful machine learning algorithms? I would recommend checking out some more detailed resources like the online course:

Conclusion

In this article, we have shown how the popular Scikit-Learn Python library can be used to build decision trees for both classification and regression problems. Being a fairly simple algorithm in and of itself, implementing decision trees with Scikit-Learn is even easier.

    Decision Trees in Python - Step by Step Implementation

    Original by Pankaj Kumar.

    Hello! In this article, we will focus on the key concepts of decision trees in Python. So let's get started.

Decision trees are among the simplest and most popular supervised machine learning algorithms for making predictions.

    The decision tree algorithm is used for regression as well as for classification problems. It's very easy to read and understand.

    What are decision trees?

A decision tree is a flowchart-like tree structure of all possible decisions, based on certain conditions. It is called a decision tree because it starts at a root and then splits into a series of decisions, like a tree.

The tree starts at the root node, where the most important attribute is found. The branches are parts of the whole decision, and each leaf node holds the result of a decision.

Attribute selection measures

The best attribute or feature is selected using an Attribute Selection Measure (ASM). The selected attribute becomes the feature of the root node.

An attribute selection measure is a technique used to select the attribute that best discriminates among the tuples. It gives a rank to each attribute, and the best-ranked attribute is selected as the splitting criterion.

The most popular selection measures are:

1. Entropy
2. Information gain
3. Gain ratio
4. Gini index

1. Entropy

Entropy is a measure of the randomness in the information being processed.

It measures the purity of a split. The higher the entropy, the harder it is to draw conclusions from the information. Entropy ranges from 0 to 1, where 1 means the subset is completely impure.

 H(s) = -P(+) * log2(P(+)) - P(-) * log2(P(-)) 

Here P(+) and P(-) are the percentages of the positive (+ve) and negative (-ve) classes in the subset.

    Example:

    If there are 100 instances in our class in which 30 are positive and 70 are negative, then,

 P(+) = 3/10 and P(-) = 7/10 
 H(s) = -3/10 * log2(3/10) - 7/10 * log2(7/10) ≈ 0.88 

2. Information gain

Information gain is a decrease in entropy. Decision trees use information gain and entropy to determine which feature to split on at each node in order to get closer to predicting the target, and also to determine when to stop splitting.

 Gain(S, A) = H(S) - Σ_v (|S_v| / |S|) * H(S_v) 

Here S is the set of instances, A is an attribute, and S_v is the subset of S for which attribute A has value v.

    Example:

For the overall data, the value Yes is present 5 times and the value No is present 5 times, so:

 H(s) = -[ (5/10) * log2(5/10) + (5/10) * log2(5/10) ] = 1 

Now let's analyze the True values of the split attribute: Yes is present 4 times and No is present 2 times.

 H(s) = -[ (4/6) * log2(4/6) + (2/6) * log2(2/6) ] = 0.917 

For the False values (1 Yes and 3 No):

 H(s) = -[ (3/4) * log2(3/4) + (1/4) * log2(1/4) ] = 0.811 
 Net Entropy = (6/10) * 0.917 + (4/10) * 0.811 = 0.874 
 Total Reduction = 1 - 0.874 = 0.126 

    This value (0.126) is called the information gain.
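
A small Python sketch (not part of the original article) that reproduces this arithmetic:

 from math import log2

 def entropy(pos, neg):
     # Entropy of a subset with `pos` positive and `neg` negative examples.
     total = pos + neg
     h = 0.0
     for count in (pos, neg):
         p = count / total
         if p > 0:
             h -= p * log2(p)
     return h

 parent = entropy(5, 5)                                          # 1.0
 weighted = (6 / 10) * entropy(4, 2) + (4 / 10) * entropy(1, 3)
 print(round(parent - weighted, 3))  # 0.125; the article's 0.126 comes from intermediate rounding 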

3. Gain ratio

The gain ratio is a modification of information gain. It takes into account the number and size of the branches when choosing an attribute, by also considering the intrinsic information of a split:

 GR(S, A) = Gain(S, A) / IntI(S, A) 

4. Gini index

The Gini index is another type of criterion that helps us calculate information gain. It measures the impurity of a node and is calculated for binary values only.

Example:

Suppose a node contains 6 instances, all of class C2 and none of class C1. Then:

 P(C1) = 0/6 = 0 P(C2) = 6/6 = 1 
 Gini = 1 - (P(C1)² + P(C2)²) = 1 - (0 + 1) = 0 

so the node is completely pure. Gini impurity is more computationally efficient to calculate than entropy.

    Decision Tree Algorithms in Python

    Let's take a look at some of the decision trees in Python.

    1. Iterative Dichotomizer 3 (ID3)

This algorithm selects splits by calculating the information gain. The information gain for each level of the tree is calculated recursively.

2. C4.5

This algorithm is a modification of the ID3 algorithm. It uses information gain or the gain ratio to select the best attribute. It can handle both continuous and missing attribute values.

3. CART (Classification and Regression Trees)

This algorithm can perform classification as well as regression. In a classification tree, the target variable is categorical and fixed in advance. In a regression tree, the value of the target variable must be predicted.

    Decision tree classification using Scikit-Learn

We will use the Scikit-Learn library to build the model, along with the IRIS dataset, which is already included in the Scikit-Learn library (it can also be downloaded from here).

The dataset contains three classes (Iris Setosa, Iris Versicolour, Iris Virginica) with the following attributes:

    • Sepal Length
    • Sepal Width
    • Petal Length
    • Petal Width

    We have to predict the class of an IRIS plant based on its attributes.

    1. First, import the required libraries

 import pandas as pd
 import numpy as np
 from sklearn.datasets import load_iris
 from sklearn import tree 

2. Now load the IRIS dataset

     iris=load_iris() 

To see all the features in the dataset, use the print function:

 print(iris.feature_names) 

    Output:

     ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] 

    To see all target names in a dataset

     print(iris.target_names) 

    Output:

     ['setosa' 'versicolor' 'virginica'] 

3. Remove the labels

Now we will remove the elements at positions 0, 50, and 100. The 0th element belongs to the Setosa species, the 50th belongs to the Versicolor species, and the 100th belongs to the Virginica species.

This removes those samples from the training data so that we can later check whether the trained classifier classifies them correctly.

 # Splitting the dataset
 removed = [0, 50, 100]
 new_target = np.delete(iris.target, removed)
 new_data = np.delete(iris.data, removed, axis=0) 

    4. Train the decision tree classifier

The last step is to use the decision tree classifier from Scikit-Learn for the classification.

 # Train the classifier
 clf = tree.DecisionTreeClassifier()             # define the decision tree classifier
 clf = clf.fit(new_data, new_target)             # train on the reduced data and target
 prediction = clf.predict(iris.data[removed])    # predict the classes of the removed samples 

We now check whether the predicted labels match the original labels:

     print("Original Labels",iris. target[removed]) print("Labels Predicted", prediction) 

    Output:

     Original Labels [0 1 2] Labels Predicted [0 1 2] 

Wow! Our model predicted all three removed samples correctly. Now let's plot the decision tree:

     tree.plot_tree(clf) 

    Conclusion

In this tutorial, we learned about some important concepts for decision trees, such as selecting the best attribute, information gain, entropy, gain ratio, and the Gini index.

