Long Short Term Memory Recurrent Network for Standard and Poor’s 500 Index Modelling

  • Authors

    • Said Jadid Abdulkadir
    • Hitham Alhussian
    • Muhammad Nazmi
    • Asim A Elsheikh
    Published: 2018-10-07
    DOI: https://doi.org/10.14419/ijet.v7i4.15.21365
  • Keywords

    Financial Time-Series, Long Short-Term Memory, Recurrent Network, Standard and Poor’s 500 Index.
  • Abstract

    Forecasting time-series data is imperative, especially when planning must rely on models built from uncertain knowledge of future events. Recurrent neural network models have been applied in industry and outperform standard artificial neural networks in forecasting, but they fail in long-term time-series forecasting because of the vanishing gradient problem. This study offers a robust solution for long-term forecasting using a special recurrent neural network architecture, the Long Short-Term Memory (LSTM) model, which overcomes the vanishing gradient problem. The LSTM is designed so that avoiding the long-term dependency problem is its default behavior: its gated memory cells can preserve error signals over long intervals. Empirical analysis is performed using quantitative forecasting metrics and comparative model performance on the forecasted outputs. An evaluation analysis validates that the LSTM model provides better forecasts of the Standard & Poor’s 500 Index (S&P 500), in terms of error metrics, than other forecasting models.
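
    The paper itself includes no code, but the workflow the abstract describes (windowed one-step-ahead forecasting of S&P 500 closing prices with an LSTM, evaluated by error metrics) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the Keras API, the 20-step look-back window, the 32-unit LSTM layer, the training settings, and the synthetic placeholder series are all choices made here for the sake of a runnable example.

    ```python
    # Hedged sketch: one-step-ahead index forecasting with a Keras LSTM,
    # scored by RMSE and MAE. All hyperparameters are illustrative assumptions.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    # Placeholder series standing in for S&P 500 daily closes (a random walk);
    # in practice this would be the historical index data used in the study.
    prices = 2000.0 + np.cumsum(rng.normal(0.0, 10.0, size=1500))

    # Scale to [0, 1] so the LSTM gates operate in a well-conditioned range.
    lo, hi = prices.min(), prices.max()
    scaled = (prices - lo) / (hi - lo)

    def windows(series, lag):
        """Turn a 1-D series into (samples, lag, 1) inputs and next-step targets."""
        X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
        return X[..., None], series[lag:]

    LAG = 20                      # look-back window (assumed)
    X, y = windows(scaled, LAG)
    split = int(0.8 * len(X))     # chronological train/test split
    X_tr, y_tr = X[:split], y[:split]
    X_te, y_te = X[split:], y[split:]

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(LAG, 1)),
        tf.keras.layers.LSTM(32),   # gated memory cells counter the vanishing gradient
        tf.keras.layers.Dense(1),   # next-step (scaled) price
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)

    # Undo the scaling and report the error metrics used for model comparison.
    pred = model.predict(X_te, verbose=0).ravel() * (hi - lo) + lo
    true = y_te * (hi - lo) + lo
    print("RMSE:", float(np.sqrt(np.mean((pred - true) ** 2))))
    print("MAE: ", float(np.mean(np.abs(pred - true))))
    ```

    The split is chronological rather than random so the held-out segment simulates genuine out-of-sample forecasting, the setting in which the vanishing gradient problem hampers plain recurrent networks.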

  • How to Cite

    Jadid Abdulkadir, S., Alhussian, H., Nazmi, M., & A Elsheikh, A. (2018). Long Short Term Memory Recurrent Network for Standard and Poor’s 500 Index Modelling. International Journal of Engineering & Technology, 7(4.15), 25-29. https://doi.org/10.14419/ijet.v7i4.15.21365

    Received date: 2018-10-09

    Accepted date: 2018-10-09

    Published date: 2018-10-07