Implementation of modified SARSA learning technique in EMCAP

  • Authors

    • D. Ganesha
    • Vijayakumar Maragal Venkatamuni

    Published: 2017-12-31
    DOI: https://doi.org/10.14419/ijet.v7i1.5.9161
  • Keywords

    Self-learning, Cognitive Control, SARSA Learning.
  • Abstract

    This research work presents an analysis of a modified SARSA learning algorithm. State-Action-Reward-State-Action (SARSA) is a technique for learning a Markov decision process (MDP) policy, used in reinforcement learning within the fields of artificial intelligence (AI) and machine learning (ML). The modified SARSA algorithm selects better actions in order to obtain better rewards. Experiments are conducted to evaluate the performance of each agent individually, and the same statistics are collected for every agent so that results can be compared across agents. The experimental analysis considers varied kinds of agents at different levels of the architecture. The Fungus World testbed, implemented in SWI-Prolog 5.4.6, is used for the experiments; fixed obstacles are placed to create locations specific to the Fungus World environment, and various parameters are introduced into the environment to test an agent's performance. The modified SARSA learning algorithm is well suited to the EMCAP architecture: in the experiments conducted, the modified SARSA learning system obtains more rewards than the existing SARSA algorithm.
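
    For context, the baseline tabular SARSA rule that the modified algorithm builds on updates the action-value table as

        Q(s, a) ← Q(s, a) + α [ r + γ Q(s′, a′) − Q(s, a) ],

    where a′ is the action actually taken in the next state s′ (SARSA is on-policy). The abstract does not spell out the modification, and the authors' testbed is written in SWI-Prolog, so the minimal Python sketch below shows only the baseline algorithm; the env interface, ε-greedy exploration, and hyper-parameter values are illustrative assumptions, not the paper's implementation.

        import random
        from collections import defaultdict

        def epsilon_greedy(Q, state, actions, epsilon):
            """With probability epsilon explore randomly; otherwise act greedily."""
            if random.random() < epsilon:
                return random.choice(actions)
            return max(actions, key=lambda a: Q[(state, a)])

        def sarsa(env, actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
            """Tabular on-policy SARSA. `env` is a hypothetical interface with
            reset() -> state and step(action) -> (next_state, reward, done)."""
            Q = defaultdict(float)  # Q[(state, action)], defaults to 0.0
            for _ in range(episodes):
                state = env.reset()
                action = epsilon_greedy(Q, state, actions, epsilon)
                done = False
                while not done:
                    next_state, reward, done = env.step(action)
                    next_action = epsilon_greedy(Q, next_state, actions, epsilon)
                    # On-policy TD update; no bootstrap from a terminal state.
                    td_target = reward + (0.0 if done else gamma * Q[(next_state, next_action)])
                    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
                    state, action = next_state, next_action
            return Q

    Replacing Q(s′, a′) with a maximum over a′ would give Q-learning; keeping the sampled a′ is what makes SARSA evaluate the same behaviour policy it is improving.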

  • How to Cite

    Ganesha, D., & Venkatamuni, V. M. (2017). Implementation of modified SARSA learning technique in EMCAP. International Journal of Engineering & Technology, 7(1.5), 274-278. https://doi.org/10.14419/ijet.v7i1.5.9161

    Received date: 2018-01-11

    Accepted date: 2018-01-11

    Published date: 2017-12-31