7 Surprising Facts About Decision Trees

Jean de Dieu Nyandwi
Aug 19, 2021

Decision trees are powerful machine learning algorithms that can model complex datasets. They can be used for both regression and classification problems.

In this article, I want to highlight 7 interesting facts that set this kind of learning algorithm apart.

Fact #1

Unlike most machine learning models, which are black boxes, decision trees are white boxes: the predictions they make are explainable.

Because trees make it possible to visualize the prediction process as a flow chart, it's pretty easy to interpret the results. Other machine learning algorithms require additional tools to dig in and understand how they do what they do.
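
For illustration, here is a minimal Scikit-Learn sketch (on the built-in Iris dataset) that draws a fitted tree as that kind of flow chart:

```python
# A minimal sketch of plotting a fitted decision tree as a flow chart.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Each node shows the split rule, sample counts, and class distribution,
# so any single prediction can be traced from the root down to a leaf.
plt.figure(figsize=(12, 6))
plot_tree(tree, feature_names=data.feature_names, class_names=list(data.target_names), filled=True)
plt.show()
```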

Fact #2

Decision trees do not require feature scaling, because they split on per-feature thresholds rather than on distances between samples.
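
A quick sketch of why (using Scikit-Learn's built-in breast cancer dataset): fitting the same tree on raw and on standardized features should give identical predictions.

```python
# A small sketch showing that standardizing the features does not change a tree's predictions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

pred_raw = DecisionTreeClassifier(random_state=0).fit(X, y).predict(X)
pred_scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y).predict(X_scaled)

# Splits are threshold comparisons on one feature at a time, so a monotonic
# rescaling only moves the thresholds, not the shape of the tree.
print(np.array_equal(pred_raw, pred_scaled))  # expected: True
```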

Fact #3

Decision trees can handle categorical features in raw text format (Scikit-Learn doesn't support this, but TensorFlow's Decision Forests implementation does).
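
A hedged sketch of what that looks like with TensorFlow Decision Forests; the toy DataFrame and its column names are made up for illustration:

```python
# A sketch assuming the tensorflow_decision_forests package (TF-DF); the toy data is illustrative.
import pandas as pd
import tensorflow_decision_forests as tfdf

df = pd.DataFrame({
    "city": ["Kigali", "Nairobi", "Kigali", "Kampala"],  # categorical feature as raw text
    "income": [1200, 800, 1500, 950],
    "label": [1, 0, 1, 0],
})

# TF-DF consumes string-valued columns directly; no one-hot or label encoding needed.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")
model = tfdf.keras.RandomForestModel()
model.fit(train_ds)
```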

Fact #4

While most models will suffer from missing values, decision trees are okay with them.
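
How well they cope does depend on the implementation: plain Scikit-Learn decision trees have historically required imputing missing values first, while the library's histogram-based gradient boosting trees accept NaN directly. A minimal sketch of the latter:

```python
# A minimal sketch using HistGradientBoostingClassifier, a tree-based Scikit-Learn
# model that handles missing values (NaN) natively.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)

# Blank out ~10% of the entries to simulate missing data.
rng = np.random.default_rng(0)
X = X.copy()
X[rng.random(X.shape) < 0.10] = np.nan

# At each split, the model learns a default direction (left or right) for missing values.
model = HistGradientBoostingClassifier(random_state=0).fit(X, y)
print(model.score(X, y))
```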

Fact #5

Trees can handle imbalanced datasets; you only have to adjust the class weights.
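
A minimal Scikit-Learn sketch, on a synthetic dataset where the positive class is rare:

```python
# A minimal sketch: class_weight re-weights the classes during tree growth.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset where class 1 makes up only ~5% of the samples.
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)

# "balanced" weights each class inversely to its frequency;
# an explicit dict such as {0: 1.0, 1: 10.0} also works.
model = DecisionTreeClassifier(class_weight="balanced", random_state=0).fit(X, y)
```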

Fact #6

Trees can provide feature importances, i.e., how much each feature contributed to the trained model.
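
In Scikit-Learn, a fitted tree exposes them through feature_importances_; a minimal sketch:

```python
# A minimal sketch of reading the impurity-based feature importances of a fitted tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)

# feature_importances_ sums to 1; larger values mean a feature drove more impurity reduction.
for name, importance in zip(data.feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.3f}")
```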

Fact #7

Trees are the basic building blocks of ensemble methods such as random forests and gradient boosting machines.
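
A minimal Scikit-Learn sketch of both kinds of ensemble:

```python
# A minimal sketch: both ensembles below are built out of many decision trees.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)        # bagging of trees
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X, y)  # sequentially boosted trees
print(forest.score(X, y), boosting.score(X, y))
```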

A well-known downside of decision trees is that they tend to overfit the data easily (you can pretty much assume they will overfit at first), but with a little hyperparameter tuning they can give great results, especially on complex structured (or tabular) datasets.
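
A minimal sketch of reining in overfitting with a small, cross-validated grid search over tree depth and leaf size:

```python
# A minimal sketch of taming an overfitting tree with cross-validated hyperparameter tuning.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

params = {
    "max_depth": [3, 5, 10, None],     # shallower trees generalize better
    "min_samples_leaf": [1, 5, 20],    # larger leaves smooth out noisy splits
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), params, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```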

Thanks for reading.

Each week, I write one article about machine learning techniques, ideas, or best practices. You can help this article reach more people by sharing it with friends or the tech communities you are part of.

And connect with me on Twitter!
