Abstract
Mobile Ad Hoc Networks (MANETs) are wireless, self-organizing, and distributed networks that suffer from high energy consumption and increased latency due to dynamic topological changes and limited resources. These conditions pose serious challenges for classical routing algorithms such as AODV and DSR, which struggle to optimize their performance. To address these issues, this work proposes an intelligent routing strategy based on Deep Q-Networks (DQN), a reinforcement learning technique, for improving route efficiency in MANETs.
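For context, a DQN agent of this kind is trained by minimizing the standard temporal-difference loss; the equation below is general DQN background, not a formula stated in the abstract itself:

```latex
% Standard DQN objective: regress Q(s, a; \theta) toward the bootstrapped
% target computed with a periodically frozen target network \theta^-,
% over transitions (s, a, r, s') sampled from a replay buffer \mathcal{D}.
L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{D}}
  \left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta) \right)^2 \right]
```

In the routing setting, the state s would summarize local link conditions, each action a selects a next hop, and the reward r would penalize latency and energy expenditure.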
The proposed method learns optimal routing policies dynamically, based on network parameters such as node energy, link stability, and data transmission latency. The framework is trained and validated using network simulation software such as NS-3 and OMNeT++, and the deep reinforcement learning component is implemented with TensorFlow and OpenAI Gym. Performance metrics such as energy consumption, latency, throughput, and packet loss are used to evaluate the proposed method.
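A minimal sketch of such a training setup is shown below, assuming a Gym-style environment whose state vector encodes per-neighbor node energy, link stability, and latency, and whose actions select a next hop. All names, dimensions, and the network architecture are illustrative assumptions, not the paper's reported implementation:

```python
# Illustrative DQN routing sketch (assumed design, not the paper's exact code).
import random
from collections import deque

import numpy as np
import tensorflow as tf

STATE_DIM = 9        # e.g. 3 neighbors x (energy, link stability, latency)
NUM_ACTIONS = 3      # one action per candidate next hop
GAMMA = 0.99         # discount factor on future routing cost

def build_q_network() -> tf.keras.Model:
    """Small MLP mapping a link-state vector to per-next-hop Q-values."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(STATE_DIM,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_ACTIONS),  # linear Q-value head
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    return model

q_net, target_net = build_q_network(), build_q_network()
target_net.set_weights(q_net.get_weights())
replay: deque = deque(maxlen=10_000)  # stores (s, a, r, s', done) transitions

def act(state: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy next-hop selection from the current Q-network."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    q_values = q_net.predict(state[None, :], verbose=0)
    return int(np.argmax(q_values[0]))

def train_step(batch_size: int = 32) -> None:
    """One DQN update: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
    targets = q_net.predict(states, verbose=0)
    next_q = target_net.predict(next_states, verbose=0).max(axis=1)
    targets[np.arange(batch_size), actions] = rewards + GAMMA * next_q * (1.0 - dones)
    q_net.fit(states, targets, verbose=0)
```

In a full setup, the transitions filling the replay buffer would come from stepping a simulator-backed environment (e.g., an NS-3 or OMNeT++ bridge exposed through the Gym interface), with the reward combining a latency penalty and an energy-depletion penalty, and the target network synchronized with the online network every few hundred steps.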
Experimental results confirm that the DQN-based protocol not only reduces energy consumption but also minimizes latency compared to conventional routing protocols. The improvement is particularly beneficial for applications that require energy-efficient and reliable communication, for instance, military operations and emergency networks. Finally, the research also proposes future enhancements, including extending the model to VANETs, implementing Transformer-based RL, and incorporating Software-Defined Networking (SDN) for scalability and control.
Keywords
- MANET
- Routing Protocols
- Deep Reinforcement Learning
- DQN
- Energy Efficiency
- Latency Reduction