
Doctor of Philosophy in Computer Science


Department of Computer Science

Date of Award

Fall 2015


Dr. Sajjad Haider

Committee Member 1

Dr. Sharifullah Khan, National University of Sciences and Technology (NUST), Islamabad, Pakistan

Committee Member 2

Dr. Jawad Shamsi, National University of Computer and Emerging Sciences (NUCES), Karachi, Pakistan

Project Type


Access Type

Restricted Access

Document Version



xvi, 112


Computer Science


This dissertation addresses the problem of building collaboration in a team of autonomous agents and presents imitation learning as an effective mechanism for building this collaboration. Imitation learning involves learning from an expert by observing her demonstrate a task and then mimicking her behavior. This mechanism requires less time and technical expertise on the part of domain experts and knowledge engineers, making it convenient for them to transfer knowledge to a software agent. The research extends the idea of a single demonstration to multi-human demonstrations and presents a framework of Team Learning from Demonstration (TLfD) that allows a group of human experts to train a team of agents via demonstrations. A major challenge faced by the research is coping with the overhead of demonstrations and the inconsistencies among human demonstrations. To reduce the demonstration overhead, the dissertation emphasizes a modular approach that enables the framework to train a large team of agents with a smaller number of demonstrators. The framework learns the collaborative strategy in the form of a weighted naïve Bayes model, where the parameters of the model are learned from the demonstration data and its weights are optimized using Artificial Immune Systems. The framework is thoroughly evaluated in the domain of RoboCup Soccer Simulation 3D, a promising multi-agent platform that addresses many complex real-world problems. A series of experiments was conducted using RoboCup Soccer in which the agents were trained to perform different types of tasks through the TLfD framework. The experiments began by training a single agent to score a goal on an empty soccer field. Later experiments increased the complexity of the task and the number of agents involved. The final experiment trained a full-fledged team of nine soccer players and enabled them to play soccer against other competition-quality teams.
A number of test matches were played against different opponent teams, and the results of the matches were evaluated on the basis of different performance and behavioral metrics. The performance metrics described how well the imitating team played on the field, whereas the behavioral metrics assessed how closely it imitated the human demonstrations. Our Soccer Simulation 3D team, KarachiKoalas, served as a benchmark for evaluating the quality of the imitating team; a close comparison of the two teams found that the team trained via imitation performed comparably to KarachiKoalas. The results demonstrated the effectiveness of the TLfD framework and supported the idea of using imitation to build collaboration among multiple agents. However, the framework, in its current form, does not support building a strategy incrementally, in which a naïve strategy is learned via imitation and then refined in stages. The ability to build strategies incrementally can be a crucial requirement in complex systems. In future work, the framework can be extended with the ability to refine an already learned strategy via a human expert's feedback.
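To make the strategy representation mentioned in the abstract concrete, the sketch below shows a weighted naïve Bayes action model trained from demonstration tuples, paired with a much-simplified clonal-selection-style weight search standing in for the Artificial Immune Systems optimizer. Every name, feature, and data point here is hypothetical; the actual TLfD model, features, and optimizer are described only in the full text of the dissertation.

```python
import math
import random
from collections import defaultdict

class WeightedNaiveBayes:
    """Hypothetical weighted naive Bayes action model.

    Each demonstration is (features, action). Counts are Laplace-smoothed,
    and a per-feature weight scales that feature's log-likelihood, so
    score(a | x) = log P(a) + sum_i w_i * log P(x_i | a).
    """

    def __init__(self, n_features, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing constant
        self.action_counts = defaultdict(int)   # action -> count
        self.feat_counts = defaultdict(int)     # (action, i, value) -> count
        self.feat_values = [set() for _ in range(n_features)]
        self.total = 0
        self.weights = [1.0] * n_features       # optimized separately

    def fit(self, demos):
        for feats, action in demos:
            self.total += 1
            self.action_counts[action] += 1
            for i, v in enumerate(feats):
                self.feat_counts[(action, i, v)] += 1
                self.feat_values[i].add(v)

    def _log_score(self, feats, action):
        s = math.log(self.action_counts[action] / self.total)
        for i, v in enumerate(feats):
            num = self.feat_counts[(action, i, v)] + self.alpha
            den = self.action_counts[action] + self.alpha * len(self.feat_values[i])
            s += self.weights[i] * math.log(num / den)
        return s

    def predict(self, feats):
        return max(self.action_counts, key=lambda a: self._log_score(feats, a))

def optimize_weights(model, val_demos, generations=20, clones=5, seed=0):
    """Toy clonal-selection-style search: clone the best weight vector,
    mutate the clones, and keep any clone that imitates the validation
    demonstrations better (a simplified stand-in for a full AIS)."""
    rng = random.Random(seed)

    def fitness(w):
        model.weights = w
        return sum(model.predict(f) == a for f, a in val_demos)

    best = list(model.weights)
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(clones):
            cand = [max(0.0, w + rng.gauss(0, 0.2)) for w in best]
            f = fitness(cand)
            if f > best_fit:
                best, best_fit = cand, f
    model.weights = best
    return best

# Toy demonstration data: (ball distance, teammate state) -> action.
demos = [
    (("near", "open"), "pass"),
    (("near", "blocked"), "dribble"),
    (("far", "open"), "move"),
    (("near", "open"), "pass"),
]
model = WeightedNaiveBayes(n_features=2)
model.fit(demos)
optimize_weights(model, demos)
print(model.predict(("near", "open")))  # prints "pass"
```

The same validation demonstrations double as the fitness set here purely for brevity; a real setup would hold out separate demonstrations so the weight search measures imitation quality rather than memorization.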

The full text of this document is only accessible to authorized users.