Design of modified ripper algorithm to predict customer churn

  • Authors

    • M. Rajeswari, Bharathiar University
    • T. Devi, Bharathiar University

  • Published: 2015-05-30
  • DOI: https://doi.org/10.14419/ijet.v4i2.4221
  • Keywords: Churn, Class Imbalance, Customer Relationship Management, Data Mining
  • Abstract: Technologies such as data warehousing, data mining, and campaign management software have made Customer Relationship Management (CRM) an area in which firms can gain a competitive advantage. It is now common knowledge in business that retaining existing customers is an important survival strategy. Once potential churners are identified, they can be targeted with proactive retention campaigns in a bid to retain them. Such campaigns usually offer incentives to persuade the customer to continue their service with the supplier. These incentives can be costly, so offering them to customers who have no intention of defecting results in lost revenue. Moreover, many predictive techniques do not leave sufficient lead time to make customer contact, so customers who intend to leave cannot be reached before they defect. This research aims to develop methodologies for predicting customer churn well in advance while keeping misclassification rates to a minimum.
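The rule-learning family that RIPPER belongs to can be illustrated with a minimal sequential-covering sketch: grow one conjunctive rule at a time for the churn class, remove the examples it covers, and repeat. This is not the authors' modified algorithm, and the feature names and toy records below are invented purely for illustration.

```python
# Minimal sequential-covering sketch (the core loop behind RIPPER-style
# rule learners). Hypothetical binary churn features; data are invented.

# Toy records: binary features -> churn label (1 = churned)
data = [
    ({"tenure_short": 1, "complaints_high": 1, "usage_low": 1}, 1),
    ({"tenure_short": 1, "complaints_high": 0, "usage_low": 1}, 1),
    ({"tenure_short": 0, "complaints_high": 1, "usage_low": 0}, 0),
    ({"tenure_short": 0, "complaints_high": 0, "usage_low": 0}, 0),
    ({"tenure_short": 0, "complaints_high": 0, "usage_low": 1}, 0),
    ({"tenure_short": 1, "complaints_high": 1, "usage_low": 0}, 1),
]

def learn_rules(examples, target=1):
    """Greedy sequential covering: grow one conjunctive rule until it
    covers only target-class examples, remove what it covers, repeat."""
    rules, remaining = [], list(examples)
    while any(y == target for _, y in remaining):
        rule, covered = {}, remaining
        # Grow phase: repeatedly add the literal with best precision.
        while any(y != target for _, y in covered):
            best = None
            for feat in covered[0][0]:
                if feat in rule:
                    continue
                for val in (0, 1):
                    cov = [(x, y) for x, y in covered if x[feat] == val]
                    pos = sum(1 for _, y in cov if y == target)
                    if cov and (best is None or pos / len(cov) > best[0]):
                        best = (pos / len(cov), feat, val, cov)
            if best is None or best[0] == 0:
                break  # no literal helps; stop growing this rule
            _, feat, val, covered = best
            rule[feat] = val
        rules.append(rule)
        # Remove every example the finished rule covers.
        remaining = [(x, y) for x, y in remaining
                     if not all(x[f] == v for f, v in rule.items())]
    return rules

def predict(rules, x, target=1, default=0):
    """Fire the first matching rule, else fall back to the default class."""
    if any(all(x[f] == v for f, v in r.items()) for r in rules):
        return target
    return default

rules = learn_rules(data)
print(rules)  # a single learned rule covers all churners in this toy set
```

RIPPER proper adds an MDL-based stopping criterion, incremental reduced-error pruning of each grown rule, and a global optimization pass; the sketch keeps only the covering loop those refinements wrap around.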



  • How to Cite

    Rajeswari, M., & Devi, T. (2015). Design of modified ripper algorithm to predict customer churn. International Journal of Engineering & Technology, 4(2), 408-413. https://doi.org/10.14419/ijet.v4i2.4221