A Machine Level Approach for Mining the Big Data in Context with Random Forest

International Journal of Computer & Organization Trends  (IJCOT)          
 
© 2019 by IJCOT Journal
Volume - 9 Issue - 2
Year of Publication : 2019
Authors :  Sambaraja Sravani, A.Ravi Kumar
DOI : 10.14445/22492593/IJCOT-V9I2P305

Citation

MLA Style: Sambaraja Sravani, A.Ravi Kumar, "A Machine Level Approach for Mining the Big Data in Context with Random Forest," International Journal of Computer and Organization Trends 9.2 (2019): 17-21.

APA Style: Sambaraja Sravani, A.Ravi Kumar (2019). A Machine Level Approach for Mining the Big Data in Context with Random Forest. International Journal of Computer and Organization Trends, 9(2), 17-21.

Abstract

The random forest method is one of the most widely applied classification algorithms at present. Starting from real big-data scenarios and requirements, this paper studies the use of the random forest method in the big-data environment in depth. Because big data requires processing a very large number of records at the same time, and because data patterns change continually over time, the accuracy of a random forest algorithm without self-renewal and adaptive capability will gradually decline. Addressing this problem, the paper analyzes the characteristics of the random forest method, shows how to realize a self-adaptive capability with random forests under such conditions, verifies the feasibility of the new method on real data, and analyzes and discusses how to further research and improve the random forest method in the big-data environment.
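The self-renewal idea the abstract describes — periodically refitting the forest on the most recent window of records so accuracy does not decay as the data pattern drifts — can be sketched with a toy, dependency-free random forest of decision stumps. This is an illustrative sketch, not the paper's actual method: every name here (`TinyRandomForest`, `renew`, the blob data) is hypothetical.

```python
import random
from collections import Counter


def train_stump(X, y, rng, n_candidates=5):
    """Fit a one-level tree: try a few random (feature, threshold) splits
    drawn from the data and keep the one with the lowest training error."""
    best, best_err = None, float("inf")
    n_features = len(X[0])
    for _ in range(n_candidates):
        f = rng.randrange(n_features)
        thr = X[rng.randrange(len(X))][f]
        left = [y[i] for i, row in enumerate(X) if row[f] <= thr]
        right = [y[i] for i, row in enumerate(X) if row[f] > thr]
        if not left or not right:
            continue
        l_lab = Counter(left).most_common(1)[0][0]
        r_lab = Counter(right).most_common(1)[0][0]
        err = sum(v != l_lab for v in left) + sum(v != r_lab for v in right)
        if err < best_err:
            best, best_err = (f, thr, l_lab, r_lab), err
    if best is None:  # degenerate sample: always predict the majority class
        lab = Counter(y).most_common(1)[0][0]
        best = (0, float("inf"), lab, lab)
    return best


def predict_stump(stump, x):
    f, thr, l_lab, r_lab = stump
    return l_lab if x[f] <= thr else r_lab


class TinyRandomForest:
    """Bagged ensemble of randomized stumps with majority voting."""

    def __init__(self, n_trees=25, seed=0):
        self.n_trees = n_trees
        self.rng = random.Random(seed)
        self.trees = []

    def fit(self, X, y):
        self.trees = []
        n = len(X)
        for _ in range(self.n_trees):
            idx = [self.rng.randrange(n) for _ in range(n)]  # bootstrap sample
            self.trees.append(train_stump([X[i] for i in idx],
                                          [y[i] for i in idx], self.rng))
        return self

    def predict(self, x):
        votes = Counter(predict_stump(t, x) for t in self.trees)
        return votes.most_common(1)[0][0]

    def renew(self, recent_X, recent_y):
        """'Self-renewal': discard the old trees and refit on the most
        recent window of records so the model tracks drifting data."""
        return self.fit(recent_X, recent_y)


# Two Gaussian blobs; after "drift", the class labels swap.
data_rng = random.Random(1)
def blob(cx, cy, n):
    return [[data_rng.gauss(cx, 1.0), data_rng.gauss(cy, 1.0)] for _ in range(n)]

X_old = blob(0, 0, 100) + blob(4, 4, 100)
y_old = [0] * 100 + [1] * 100
X_new, y_new = X_old, [1 - v for v in y_old]  # concept drift: labels flipped

forest = TinyRandomForest(n_trees=25, seed=2).fit(X_old, y_old)
acc_before_drift = sum(forest.predict(x) == t for x, t in zip(X_old, y_old)) / 200

forest.renew(X_new, y_new)  # periodic refit on the newest window
acc_after_renewal = sum(forest.predict(x) == t for x, t in zip(X_new, y_new)) / 200
```

Without the `renew` step, the forest trained on the old window would score near zero once the labels flip; refitting on the recent window restores its accuracy, which is the essence of the adaptive behavior the paper argues for.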

References

[1] Han J, Kamber M 2001 Data Mining: Concepts and Techniques. San Francisco, CA: Morgan Kaufmann Publishers
[2] Liu B, Ma Y, Wong C K 2001 Classification using association rules: weaknesses and enhancements. In Vipin Kumar, et al. Data Mining for Scientific Applications
[3] Bernardo J M, Smith A F M 2001 Bayesian Theory. Measurement Science and Technology 12 211
[4] Liu Hongyan, Chen Jian, Chen Guoqing, et al 2002 A Review of Classification Algorithms in Data Mining. Journal of Tsinghua University (Science and Technology) 42(6) 727-30
[5] Li Xiujuan, Tian Chuan, Feng Xin, et al 2010 Research on Classification Technology in Data Mining. Modern Electronics Technique 33(20) 86-8
[6] Li Xuechan 2008 Research on a Classification Algorithm for Large Amounts of Data Based on Database Sampling. Computer Science 35(6) 299, cover 3
[7] Shafer J, Agrawal R, Mehta M 1996 SPRINT: a scalable parallel classifier for data mining. Proceedings of the 22nd International Conference on Very Large Data Bases 544-55
[8] Breiman L 2001 Random Forests. Machine Learning 45(1) 5-32
[9] Zhang H P, Wang M H 2009 Search for the smallest random forest. Statistics and Its Interface 2 381-8
[10] Robnik-Sikonja M 2004 Improving Random Forests. Proceedings of the 15th European Conference on Machine Learning 359-70
[11] Leistner C, Saffari A, Santner J, et al 2009 Semi-supervised random forests. IEEE 12th International Conference on Computer Vision 506-13
[12] Shen Chunhua, Li Hanxi 2010 Boosting through Optimization of Margin Distributions. IEEE Transactions on Neural Networks 21(4) 659-66

Keywords
Decision tree, Random Forest, Big Data