A survey on big data analytics with deep learning in text using machine learning mechanisms

  • Abstract

    Big Data Analytics and Deep Learning are two major focal areas of data science. Big Data has become important as many organizations, both public and private, have been collecting massive amounts of domain-specific information, which can contain useful knowledge about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Microsoft and Google are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process: complex abstractions are learned at a given level based on relatively simpler abstractions formulated in the preceding level of the hierarchy. A key benefit of Deep Learning is its ability to analyze and learn from massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics, where raw data is largely unlabelled and uncategorized. In the present study, we explore how Deep Learning can be used to address some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. Deep learning with Machine Learning (ML) is continuously demonstrating its power in a wide range of applications, and it has been pushed to the forefront in recent years largely owing to the advent of big data. ML algorithms have never held greater promise, nor faced greater challenges, than when confronted with big data.
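As a minimal, hedged illustration of the hierarchical learning process described above, the sketch below passes toy document vectors through three stacked layers, each building a more abstract representation from the previous one. The layer sizes, random weights, and ReLU activation are illustrative assumptions, not methods from the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer: affine transform followed by a ReLU non-linearity.
    return np.maximum(0.0, x @ w + b)

# Toy input: 4 "documents", each a 10-dimensional raw feature vector.
x = rng.standard_normal((4, 10))

# Three stacked layers: 10 -> 8 -> 5 -> 3 dimensions. Weights are random
# here; in practice they would be learned, often from unlabelled data.
w1, b1 = rng.standard_normal((10, 8)), np.zeros(8)
w2, b2 = rng.standard_normal((8, 5)), np.zeros(5)
w3, b3 = rng.standard_normal((5, 3)), np.zeros(3)

h1 = layer(x, w1, b1)   # low-level features of the raw input
h2 = layer(h1, w2, b2)  # mid-level abstractions built from h1
h3 = layer(h2, w3, b3)  # high-level representation built from h2

print(h3.shape)  # (4, 3): each document compressed to 3 abstract features
```

Each `h` is computed only from the level below it, which is the hierarchical structure the abstract refers to.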
    Big data enables ML algorithms to uncover more fine-grained patterns and make more timely and accurate predictions than ever before with deep learning; on the other hand, it presents serious challenges to deep learning in ML, such as model scalability and distributed computing. In this paper, we introduce a framework of Deep Learning in ML on Big Data (DLiMLBiD) to guide the discussion of its opportunities and challenges. Different machine learning algorithms are discussed; these algorithms are used for various purposes such as data mining, image processing, and predictive analytics, to name a few. The main advantage of using machine learning is that, once an algorithm learns what to do with data, it can do its work automatically. We provide a review of different methods for deep learning on text using Machine Learning and Big Data techniques.
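The "learn once, then run automatically" point above can be sketched with a minimal supervised text classifier: a perceptron trained on bag-of-words features of toy labelled sentences, then applied to unseen text. The example sentences, vocabulary, and classifier choice are illustrative assumptions, not a specific method from the survey.

```python
# Toy labelled data: 1 = data-related text, 0 = unrelated text.
docs = [("spark makes big data fast", 1),
        ("hadoop processes big data", 1),
        ("the cat sat on the mat", 0),
        ("dogs and cats play outside", 0)]

vocab = sorted({w for text, _ in docs for w in text.split()})

def vectorize(text):
    # Bag-of-words: count each vocabulary word in the text.
    words = text.split()
    return [words.count(w) for w in vocab]

# Perceptron training: nudge weights on each misclassified example.
weights = [0.0] * len(vocab)
bias = 0.0
for _ in range(10):
    for text, label in docs:
        x = vectorize(text)
        pred = 1 if sum(wi * xi for wi, xi in zip(weights, x)) + bias > 0 else 0
        if pred != label:
            step = label - pred  # +1 or -1
            weights = [wi + step * xi for wi, xi in zip(weights, x)]
            bias += step

def classify(text):
    # After training, the learned weights are applied automatically.
    x = vectorize(text)
    return 1 if sum(wi * xi for wi, xi in zip(weights, x)) + bias > 0 else 0

print(classify("big data with spark"))  # -> 1 (data-related)
print(classify("the cat and the dog"))  # -> 0 (unrelated)
```

Once training finishes, `classify` needs no further supervision, which is the automation benefit the abstract describes.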



  • Keywords

    Big data, deep learning, text extraction, machine learning.


Article ID: 12398
DOI: 10.14419/ijet.v7i2.21.12398
