Technical Papers Parallel Session-III: The PeLCoT model of software evaluation
Abstract/Description
Educational technology is developing at an unprecedented pace. As a result, educators and learners are constantly being introduced to software and apps that promise ease, convenience and applicability within a functional digital environment. While options abound, these opportunities come with significant technical, financial, conceptual, logistical and practical challenges. Institutions and organizations looking to procure and implement new software in their learning environments must first ensure adequate technological infrastructure, sufficient budget allowances, appropriate training for instructors and learners, and strategies for successful implementation and follow-up support. Much of that, however, can only be effective once appropriate software has been chosen. Consequently, making informed and effective decisions for meeting e-learning and personal learning environment needs while effecting educational change becomes even more significant. Making such decisions can be challenging, particularly given the broad range of available products, frequently coupled with a lack of understanding of what constitutes appropriate software, and the wrong decision can have a detrimental effect on efficacy and learner experience. The purpose of this paper is to introduce the Pedagogy-Learner-Content-Technology (PeLCoT) Model of Software Evaluation as a possible way to meet this challenge. The model is comprehensive: it begins with the introduction of potential software into an organization and follows a process that allows for input from various stakeholders, eventually leading to the software being rejected, or accepted and implemented. The model is theory- and research-based and outlines four significant areas of assessment. The paper focuses most specifically on the formative Instructor Evaluation, in which quantitative and qualitative data are collected before the software goes through a trial evaluation by target learners and the remainder of the model process. One of the main benefits of the model as it is currently being used is the ‘live’ collection and sharing of data as software is evaluated. As more educators and evaluators use the PeLCoT model, more data is collected and shared openly to aid educators and organizations in their decision-making processes. This method of data collection also provides the authors with valuable information regarding the viability and usability of the evaluation model, allowing for revision and further development as needed.
Keywords
E-learning, Blended learning, Educational technology, Software, Software evaluation, Evaluation model, PeLCoT
Location
C9, Aman Tower
Session Theme
Technical Papers Parallel Session-III: Software & Information Systems
Session Type
Parallel Technical Session
Session Chair
Dr. Sufian Hameed
Start Date
30-12-2017 3:00 PM
End Date
30-12-2017 3:20 PM
Recommended Citation
Amara, S. M., & Greene, K. K. (2017). Technical Papers Parallel Session-III: The PeLCoT model of software evaluation. International Conference on Information and Communication Technologies. Retrieved from https://ir.iba.edu.pk/icict/2017/2017/50