Slickplan

What is a decision tree (parts, types & algorithm examples)

What is a decision tree, and what are decision trees used for? They’re two pretty common questions that come up when talking about flowcharts. Since you’re here, they’ve likely crossed your mind, and you’re in the right place to learn. We’ll break down the different parts of a decision tree and the applications in which to use one, and by the end, you’ll find it much easier to work with this valuable tool.

What is a decision tree?

A decision tree is a supervised machine learning algorithm used for classification and regression tasks.

To put it more visually, it’s a flowchart structure where different nodes indicate conditions, rules, outcomes and classes.

Developed in the early 1960s, decision trees are primarily used in data mining, machine learning and statistics.

At its most basic, a decision tree (also known as an answer tree) is a flowchart tool that can identify, represent, predict, suggest, answer and explain a long list of questions, statements, concepts and situations.

While this decision tree definition may seem a bit complicated, decision tree models are actually easier to work with than you might think.
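To make the flowchart idea concrete, here’s a minimal sketch of a decision tree written as plain if/else rules in Python. The shipping question, thresholds and outcomes are all hypothetical, invented just to show the shape of the structure.

```python
# A tiny decision tree expressed as if/else rules (hypothetical example).
def shipping_method(weight_kg: float, is_express: bool) -> str:
    if is_express:                # root node: is the order express?
        return "air freight"      # leaf node (outcome)
    if weight_kg <= 5:            # internal node: check the weight
        return "standard parcel"  # leaf node
    return "ground freight"       # leaf node

print(shipping_method(2.0, is_express=False))  # -> standard parcel
```

Every real decision tree, no matter how it’s built, boils down to this same chain of conditions ending in outcomes.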

What are the different types of decision trees?

There are two types of decision trees: classification trees and regression trees. From there, they branch into different algorithms and use various nodes and branches to make them whole. It’s absolutely imperative that you choose the type that directly suits the purpose of your decision tree.

Choosing the right one is pretty straightforward. Using the example questions added to each type below, you can compare your needs and easily see which category yours falls into.

Classification trees

Classification trees typically deal with "yes" or "no" questions and are best suited to sorting real-world problems and topics into categories. For example: did the customer have a good experience? Was the order shipment complete?
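Here’s a hedged sketch of a classification tree using scikit-learn (assuming it’s installed). The tiny dataset is invented purely for illustration: each row is [items in the order, days to deliver], and the label answers "was the order shipment complete?" (1 = yes, 0 = no).

```python
# A minimal classification tree sketch with made-up order data.
from sklearn.tree import DecisionTreeClassifier

X = [[3, 2], [1, 7], [5, 3], [2, 9], [4, 1], [1, 8]]  # [items, days]
y = [1, 0, 1, 0, 1, 0]                                # 1 = complete

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)
print(clf.predict([[2, 3]]))  # e.g. [1]: shipment likely complete
```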

Regression trees

Regression trees are designed to predict continuous values and are built from historical data. Examples include: how many computers will we sell this quarter? Which store location will have the most traffic on the next holiday?
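And a matching regression sketch, again with scikit-learn and a made-up dataset where each row is [ad spend, store visits] and the target is the number of computers sold.

```python
# A minimal regression tree sketch predicting a continuous value.
from sklearn.tree import DecisionTreeRegressor

X = [[10, 200], [15, 260], [20, 310], [25, 400], [30, 480]]
y = [12, 18, 22, 30, 37]  # continuous target: units sold

reg = DecisionTreeRegressor(max_depth=2, random_state=0)
reg.fit(X, y)
print(reg.predict([[18, 300]]))  # predicted units for a new quarter
```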


Types of decision tree algorithms

There are many types of decision tree algorithms used today for all sorts of tasks, but we’ve chosen three of the more popular versions and illustrated their general abilities: ID3, Chi-Square and Reduction in Variance. Below, we’ll demonstrate what each of them can be used for.

All types can be used to solve problems or answer questions, from the simple to the most intricate and detailed.

ID3

Iterative Dichotomiser 3, or ID3, developed by Ross Quinlan, is a decision tree algorithm for categorical data that splits the dataset into two or more groups at every step based on one chosen feature.

It uses a top-down greedy approach: "top-down" means you start from the top and go down, while "greedy" means you pick the best option at that particular moment (for ID3, the split with the highest information gain) and move to the next step without backtracking.
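To show what "best option" means here, this is a small sketch of the entropy and information-gain math that ID3’s greedy choice rests on; the labels and candidate split are hypothetical.

```python
# Information gain = parent entropy - weighted child entropy.
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total)
                for n in Counter(labels).values())

parent = ["yes", "yes", "yes", "no", "no", "no"]
left, right = ["yes", "yes", "yes"], ["no", "no", "no"]  # candidate split

gain = (entropy(parent)
        - (len(left) / len(parent)) * entropy(left)
        - (len(right) / len(parent)) * entropy(right))
print(gain)  # 1.0: this split removes all the uncertainty
```

ID3 computes this gain for every available feature and greedily splits on the winner.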

Chi-Square

Chi-Square, also known as CHAID (Chi-Squared Automatic Interaction Detection), is a highly visual and easy-to-understand decision tree that uses statistical tests between input variables and the outcome to decide the best possible splits.

These can often be used in direct marketing situations where having an idea of how participants are likely to best respond would be helpful.

Chi-Square also allows for more precision and accuracy because nodes can be split into more than two branches at a time, letting more of the data’s structure come through.
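Here’s a hedged sketch of the chi-square test that CHAID-style splitting relies on, using scipy (assumed to be installed). The contingency table is invented for a direct-marketing scenario: rows are marketing channels, columns count who responded and who didn’t.

```python
# Testing whether "channel" is worth splitting on.
from scipy.stats import chi2_contingency

observed = [[30, 70],   # email: 30 responded, 70 did not
            [55, 45]]   # mail:  55 responded, 45 did not

chi2, p_value, dof, expected = chi2_contingency(observed)
print(round(chi2, 2), round(p_value, 4))
# A small p-value suggests channel and response are related,
# so splitting the node on channel is statistically justified.
```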

Reduction in Variance

This algorithm is used when the target variable is continuous: it splits nodes so that the values within each resulting child node vary as little as possible.

It’s named after the fact that it uses variance to measure and decide how nodes are split into child nodes or sub-nodes.
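A minimal sketch of that measurement, with hypothetical target values: the candidate split that removes the most variance wins.

```python
# Reduction in variance = parent variance - weighted child variance.
from statistics import pvariance

parent = [10, 12, 11, 30, 34, 32]
left, right = [10, 12, 11], [30, 34, 32]  # candidate split

reduction = pvariance(parent) - (
    len(left) / len(parent) * pvariance(left)
    + len(right) / len(parent) * pvariance(right)
)
print(round(reduction, 2))  # a large drop means a good split
```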

Decision tree nodes: the parts of a decision tree structure

Decision trees are made of a few simple parts which can be applied to any type of tree as well as all of the algorithms.

This is the easier part to learn (and they’re universal, so you don’t need to memorize a new set for every project).

Decision tree root node

The root node is the highest level of a decision tree, the starting point from which the rest of the tree grows. It’s also a parent node, meaning it has child nodes beneath it in a parent-child structure.

This node represents the question, task or problem the tree stems from.

Decision tree internal node

An internal node is a decision point inside the tree where the preceding node branches out into two or more paths; unlike the root, it has a parent of its own.

Decision tree leaf node

Also known as external nodes or terminal nodes, these are the endpoints of the tree and have no child nodes. A leaf node sits at the end of its branch, furthest from the root node, and is where the answer or solution is found.

Decision tree pruning

Pruning is the process of slimming down variables by removing nodes, leaving only the most critical nodes and potential outcomes.
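If you’re building trees programmatically, here’s a hedged sketch of pruning with scikit-learn’s cost-complexity pruning; a larger ccp_alpha trims away more nodes. The built-in iris dataset is just a stand-in.

```python
# Comparing an unpruned tree to a cost-complexity-pruned one.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(ccp_alpha=0.02, random_state=0).fit(X, y)

print(full.tree_.node_count, "->", pruned.tree_.node_count)
# Fewer nodes, but the critical splits survive.
```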

Decision tree splitting

The opposite of pruning: splitting divides a node into two or more variables, growing the tree.

Decision tree sub-tree or branch

This is a specific section of a decision tree. It contains multiple internal nodes and potentially some leaf nodes depending on the specific branch in question.

How to read a decision tree

Reading a decision tree is pretty straightforward: start at the root node and follow the branches through each internal node to see every decision point and how the end result came to be.

Interpreting a decision tree comes in two parts: working through the system of questions and data, then weighing the results against the original question.

Ideally, you’ve entered only what’s needed; nothing more, nothing less. It’s tempting to skip to the end to get your answer, but you’d be skipping the how and why of the solution, which could result in redundancies and other not-so-great inefficiencies.

In other words, using the tool properly means respecting the entire process rather than just jumping to the results.
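If your tree lives in code rather than on a canvas, one way to practice reading it is to print a fitted scikit-learn tree as text and walk it from the root down (iris is again just a stand-in dataset).

```python
# Printing a small tree as indented if/else rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

print(export_text(clf, feature_names=list(iris.feature_names)))
# Each indented line is one decision point; the "class:" lines
# are the leaf nodes where you read off the answer.
```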

Learning how to make a decision tree is simple, and Slickplan has all the tools you need (including decision tree templates) to get the job done.

When to use decision tree algorithms (with decision tree examples)

Decision tree algorithms are used in various scenarios such as (but not limited to) healthcare, retail, advertising and finance. They’re also especially helpful in situations where data is missing because the process guides you toward filling in those gaps.

At this point, we can take a look at algorithms and answer things like what is a decision tree analysis and, importantly, why use decision trees.

And what does a decision tree look like? We’ll show you something for each of the examples, don’t worry.

Decision trees machine learning example

Our first decision tree learning algorithm example, shown below, is an ML (machine learning) decision tree: a simple problem divided into nodes to create a structure that points to the most suitable outcome.

Decision tree machine learning example

What is a decision tree in machine learning?

Machine Learning (ML), a type of AI, focuses on building algorithms that allow machines to learn from data inputs and make predictions or decisions without being explicitly programmed for each task.

A decision tree algorithm in machine learning is a highly effective decision-making tool that uses data points to make predictions. It’s a non-parametric, supervised learning method that predicts target variables.

In other words, "non-parametric" means an ML decision tree isn’t limited to a structure fixed in advance; it grows and adapts to however much data you feed it.

AI decision tree example

The decision tree model example below illustrates a tree created solely with AI tools.
In this model, we’ve chosen an AI decision tree that may be used for medical purposes.

Decision tree created from AI tools

What is a decision tree in AI?

An AI (Artificial Intelligence) decision tree program is often used to predict specific future events. AI and the above-mentioned ML are closely related (ML is a subset of AI), and both use decision trees as a model for decision-making and the performance of various tasks.

AI is a broad field of work combining various techniques for simulating intelligent behavior in machines.

The nodes continue to be narrowed down until a single leaf node is left, holding the best answer. A few examples of AI decision tree applications include credit scoring, medical diagnoses and detecting fraud.

At the most basic level of decision trees in both AI and ML, the main differences are how they are created and used.

AI decision trees are often created by hand (in an app or on paper) based on expert input, while ML trees are pieced together automatically by ML data.

ML decision trees are quite valuable as they possess the ability to handle complex datasets, while AI decision trees use human expert insights.

Data analysis decision tree example

Our final decision tree algorithm example shown below highlights a breakdown of risk assessment for XYZ Corp, a made-up company for this sample.

Decision tree showing risk assessment

What is a decision tree in data analysis?

Decision tree analysis is the process of collecting all of the relevant data, then refining the tree by splitting and pruning nodes to reach the best-supported outcome, using as much of the data as possible without overfitting (learn what that means below!), of course.

Pros and cons of decision trees

In this section, we’ll discuss the advantages and disadvantages of decision trees of all makes and models.

Advantages of decision trees

  • Decision tree models are very visual and are a great tool for explaining technical issues to a wide range of people, including stakeholders.
  • They’re also easy to follow with little to no training for the reader.
  • They’re able to handle a long list of problems, issues and questions and can be built with pretty much any input data.
  • They can be used for questions and problems of all sizes and can be scaled up or down.
  • Decision trees are excellent at identifying the importance of specific attributes, which can be helpful in other areas of research in projects.

Disadvantages of decision trees

  • Overfitting is a major issue one can run into when building a decision tree. It refers to a model that fits the training data too closely, memorizing its quirks rather than learning patterns that generalize, so it fails to produce a meaningful final result.

    This can often happen when splitting goes too far and the tree sprouts a highly specific branch for every wrinkle in the data. It’s a modeling error that happens when functions are too in line with a very limited set of data points: the resulting tree spits out answers that are only relevant to the limited info you gave it and has little chance of offering an accurate answer on new data. One simple way to prevent this is to limit how deep and detailed the tree can get, as shown in the sketch after this list.

  • Decision trees can also end up with too many answers. This can happen when the tree is incomplete, or there are too many leaf nodes and no clear end result.
  • In other words, not too little, not too much. You’re aiming for Goldilocks; just right. 🐻🐻🐻
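Here’s the promised sketch: a hedged example of reining in overfitting with scikit-learn by capping how far the tree can grow (the specific caps are illustrative, not universal defaults).

```python
# Cross-validating an unrestricted tree against a capped one.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

unrestricted = DecisionTreeClassifier(random_state=0)  # grows until pure
capped = DecisionTreeClassifier(max_depth=3,           # limit depth
                                min_samples_leaf=5,    # no tiny leaves
                                random_state=0)

for name, model in [("unrestricted", unrestricted), ("capped", capped)]:
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
# The capped tree often generalizes as well or better, despite
# memorizing less of the training data.
```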

Decision tree terminology recap

Here is a quick recap of some of the terminology we’ve used in this article.

  • Parent node: A node that has a child or sub-nodes.
  • Child node: A node that has a parent above or preceding it. These may also later be a parent to another child node.
  • Branch: The lines used to connect nodes at the split points.
  • Splitting: Breaking nodes into two or more variables.
  • Pruning: Trimming down nodes to only include the best possible options.

Start using decision tree models yourself!

Now that you have the tools needed to create a decision tree, we’re certain you’ll find it easy to build one (or many) right here in Slickplan! It’s the most intuitive tool to design and share your work for a whole list of needs.

Decision trees can be used in many situations, from business to medical to planning activities or sales. They’re a great way to narrow down lots of data and make decisions that allow you to see a visual representation of all available options.

Sign up for Slickplan’s free 14-day trial (no credit card required) and begin building your decision tree today!

Learn more about our diagramming tool with these videos in Help Desk and our Diagram Maker feature page.


FAQs

  • Is a decision tree supervised or unsupervised?

    A decision tree analysis is a supervised data mining technique. It's used in many industries where data analytics and machine learning are pertinent to operations research. Supervised means the model learns from labeled training data, where inputs are paired with known outcomes, unlike unsupervised methods, which look for structure in unlabeled data.

  • What is decision tree entropy?

    Decision tree entropy is the measure of disorder or impurity in a dataset. In layman's terms, it's how much uncertainty there is in a given set of data; splits that add clarity reduce entropy. For a node where class i appears with proportion p_i, entropy is H = -Σ p_i × log2(p_i), so a node containing only one class has an entropy of 0.

  • What do decision trees tell you?

    Decision trees map out all possible outcomes of a series of related choices to find the best and most logical answer or solution. In short, they help you make the best decision. It is a clear path to an answer or solution that shows its work along the way.

Sean LeSuer

