AI Agent for Tetris using Deep Q-Learning, NEAT & the A* Algorithm
- Tech Stack: PyTorch, PyGame
- GitHub URL NEAT: Link
- GitHub URL DQL: Project Link
Conducted a comparative study of AI agents in the Tetris domain using three distinct algorithms: Deep Q-Learning (DQL), the A* algorithm, and NeuroEvolution of Augmenting Topologies (NEAT). Each agent was trained and evaluated on metrics such as game score, decision-making speed, and adaptability to increasing game complexity. Each algorithm was tailored to the dynamics of the Tetris environment: DQL learns through reinforcement, A* performs heuristic-based search over candidate piece placements, and NEAT evolves neural network topologies and weights.
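All three agents need a way to score a candidate Tetris board. A minimal sketch of a common approach is below, assuming a hand-tuned heuristic over standard board features (aggregate column height, holes, bumpiness); the function names and example weights here are illustrative, not taken from the project itself:

```python
import numpy as np

def board_features(board):
    """Extract common Tetris heuristic features from a binary board.

    board: 2-D numpy array, 1 = filled cell, 0 = empty; row 0 is the top.
    Returns (aggregate_height, holes, bumpiness).
    """
    rows, cols = board.shape
    heights = np.zeros(cols, dtype=int)
    holes = 0
    for c in range(cols):
        filled = np.flatnonzero(board[:, c])
        if filled.size:
            top = filled[0]
            heights[c] = rows - top
            # empty cells below a column's topmost filled cell are holes
            holes += int(np.sum(board[top:, c] == 0))
    aggregate_height = int(heights.sum())
    # bumpiness: total height difference between adjacent columns
    bumpiness = int(np.abs(np.diff(heights)).sum())
    return aggregate_height, holes, bumpiness

def heuristic_score(board, w=(-0.51, -0.36, -0.18)):
    """Weighted feature sum; higher means a better board state.

    The weights are illustrative hand-tuned values, not the
    project's actual parameters.
    """
    h, holes, bump = board_features(board)
    return w[0] * h + w[1] * holes + w[2] * bump
```

In this framing, an A*-style agent ranks every reachable placement with such a score, while the same feature vector can serve as the input layer for NEAT-evolved networks or as part of the state fed to a DQL agent.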
The project identified the strengths and limitations of each approach: DQL's ability to adapt through reinforcement, A*'s precision in well-defined states, and NEAT's flexibility in evolving neural architectures. The findings highlighted the best-performing agent across varying levels of gameplay difficulty, offering insight into the suitability of different AI techniques for real-time decision-making and strategy optimization in game environments.