Machine Learning Decision Tree
Different branches with unique extensions are developed. The purpose of a decision tree is to capture the structure of the existing data in the smallest possible tree. The preference for smaller trees follows the principle that the shortest description covering a group of concepts is preferred. Besides, small trees are easier to inspect and interpret, and they support faster decisions than larger trees. Several approaches are applied to construct a decision tree; the best known is the ID3 algorithm developed by J. R. Quinlan. The approach applies a greedy, top-down search through the space of possible trees, growing every branch while avoiding backtracking. The resulting trees are easy to understand because they are built directly from the data.
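The top-down, no-backtracking construction described above can be sketched as a short recursion. This is a minimal illustration, not the original ID3 implementation: rows are assumed to be dictionaries of categorical attributes, and `choose_attribute` is a stand-in for whatever selection heuristic is used at each node.

```python
from collections import Counter

def build_tree(rows, attributes, target, choose_attribute):
    """Greedy top-down construction: pick one attribute per node and
    recurse on each branch; a choice is never revisited (no backtracking)."""
    labels = [row[target] for row in rows]
    # Stop when the node is pure or no attributes remain.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    best = choose_attribute(rows, attributes, target)
    tree = {best: {}}
    for value in {row[best] for row in rows}:
        subset = [row for row in rows if row[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = build_tree(subset, remaining, target, choose_attribute)
    return tree
```

Each recursive call commits to a single attribute and descends into its branches, which is why the search never backtracks to reconsider an earlier split.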
The purity of the subdivisions improves every time a new node is established. Thus, when construction of the tree comes to an end, a clear and conclusive answer is obtained (Dangeti, 2017). The figure below shows how the division of data takes place: the blue crosses belong to a different class than the red rings. To choose a split, the entropy of each subset is computed and the subset entropies are combined as a weighted sum (Mitchell, 2010). This combined entropy is subtracted from the entropy of the earlier, unsplit set, and the resulting reduction in entropy is what is referred to as the information gain. The attribute with the highest information gain is then identified and taken as the decision node (Powell & Ryzhov, 2012). The dataset is split along its branches and the earlier procedure repeated.
If the data that was correctly categorized is added to the data that was incorrectly classified, the sum should account for the whole sample space. It is equally important to determine whether the approach divides the positive and negative items in the required manner. This check is vital in establishing whether the framework provides accurate or inaccurate predictions.
Strengths
The decision tree model has several strengths. First, it is comparatively cheap: the cost of making a prediction stays low even as more data is added. Second, the approach handles both numerical and categorical information, making it exhaustive in the sense that it captures every kind of data (Dangeti, 2017). In one applied study, researchers first derived predictive factors showing how the number of deaths was linked to nosocomial infections.
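The accuracy bookkeeping described earlier, where correct and incorrect classifications must sum to the sample space and the split of positives and negatives is inspected, can be sketched as follows. This is an illustrative helper, not part of any cited study; the function name and return fields are assumptions.

```python
def evaluate(y_true, y_pred):
    """Check that correct plus incorrect classifications cover the
    sample space, and report how positives and negatives were divided."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    incorrect = len(y_true) - correct
    assert correct + incorrect == len(y_true)  # covers the whole sample space
    # How the positive (1) and negative (0) items were divided:
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    return {"accuracy": correct / len(y_true),
            "true_positives": tp,
            "true_negatives": tn}
```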
Then they created an algorithm that estimated the risk of infection, which was used as a reference for recommendations to improve the healthcare services provided by hospitals. Their findings showed that patients who received two or more antibiotics, and those who underwent invasive procedures, had a shorter time to live after they were infected. Equally, the decision tree model that was employed identified two distinct groups with high death rates. In the first group, death occurred in fewer than 11 days.
References
, Montanari, A., & Sciavicco, G. Decision Tree Pruning via Multi-Objective Evolutionary Computation. International Journal of Machine Learning and Computing, 7(6), 167-175. doi: 10.
Machine Learning, 15(1), 25-41. doi: 10.1007/bf01000407
Lopes, Julia M. M., et al.