Introduction
Interactive technologies are by no means an emerging field: the first games, created in the 1950s, already required interaction. Rather, interactive technologies are an evolving idea that is revolutionising educational practice, especially how movement is viewed in physical education. In this article I discuss the key underpinnings of movement education and the resulting importance of interactive and immersive technologies in facilitating the creation of meaning. This provides the basis on which an effective and comprehensive analysis tool can act as a lens to assess the quality of both the Kinect sensor accessory for the Xbox 360 system and the "Just Dance 3" gaming software. The article also covers the key considerations involved in creating, structuring and implementing a Likert-scale-based checklist, and concludes with a deconstruction and critique of the assessment tool after implementation, together with strategies for its improvement.
The importance of interactive and immersive technologies in movement education
A case study conducted by Groff, Howells & Cranmer (2010, as cited in Ulicsak & Williamson, 2010) found that schools in the United Kingdom were utilising 'Guitar Hero' as a medium for multidisciplinary studies such as English, Design Technology and Science, rather than purely during a music class. It would not be a far stretch to adopt a similar approach to movement education in the Health and Physical Education domain of AusVELS, potentially increasing motivation and engagement in the connected units.
However, before reviewing developmental interactive technologies in movement education, the core underpinning aspect of movement education needs to be considered: constructing 'meaning' by facilitating 'peak experiences' (or 'flow'). Arnold (1979) suggests that a peak experience is an opportunity that offers:
· Uniqueness
· Transience of self
· Total immersion
· Euphoria in perfection
· Control
· Loss of fear
· Effortlessness
Apperley & Walsh (2012) provide context for this in interactive technologies, suggesting that 'during gameplay, pupils draw on their gaming literacies to accomplish difficult but motivating tasks and develop new knowledge by navigating the complex, changing virtual environment' (p. 177). This suggests that students must break down the experience and recognise what is required to successfully navigate the learning, thereby constructing meaning.
One way of viewing this is how Squire (2006) advocates for the re-framing of video gaming as 'designed experience', making player agency of central importance and providing an understanding of what players do with games and the meanings they construct through these actions (Malone, 1981; Murray, 1997, cited in Squire, 2006); or even the idea that 'Gameplay is not just an event on a screen, it is enacted in a specific location by a person (or people) using specific technologies' (Apperley & Walsh, 2012, p. 120). Only by understanding this can an effective assessment design be implemented by the practitioner.
Rationale for the assessment design
For the purposes of this research a pragmatic approach was selected, encompassing a positivist quantitative method combined with a qualitative interpretivist/constructionist method (Mackenzie & Knipe, 2006, pp. 195-197; McNaught, Rice & Tripp, 2000, p. 1.5). This methodology allows the researcher to be flexible in their research design in light of the various other theories on the most effective way to conduct research (Mackenzie & Knipe, 2006; O'Toole & Bennett, 2010).
According to the Evaluation Cookbook (Harvey, 1998, p. 15), a quick comparison of evidence collection methods can be based around five key areas:
· Preparation time: how long does it take to plan and construct the evaluation?
· Time/student: how long does it take for the user?
· Time/administration: how much time is needed to complete the evaluation?
· Analysis: how long does it take to code and analyse the results?
· Additional resources: how many additional resources are needed for the evaluation?
Relative to other data collection methods, Likert-scaled checklists and questionnaires are cost effective (in time and resources), with low time requirements to conduct and use, similar time requirements for analysis, and few additional resources needed to complete; however, more preparation time is required for the instrument when compared to other forms of design, such as ethnography or focus groups.
Issues associated with a checklist method
Oliver (2000) raises crucial issues in relation to checklists,
predominantly that the responses are relative to the reviewer’s opinion and bias
which has vast implications as the results given may not co-relate between
multiple reviewers. Second to this is the results tend to show the experience of
the reviewer rather than the direct experience of the students using the
product, which may disregard the different learning groups. This is mirrored by
Tergan, (1998) who discusses that evaluation based checklists have three key
weaknesses:
· Unknown reliability and validity of criteria: different assessors can allocate different ratings to items of the same category, affecting both the validity and reliability of results.
· Shortcomings in assessing instructional efficacy: evaluators fail to take into account learners' cognitive preconditions.
· Lack of tailored criteria: a generalised and inflexible checklist structure cannot target specific aspects of the software design.
Similar to the approach taken by Fan & Lê (2011), and using 'reverse item scoring' (Carifio & Perla, 2007), questions with opposing meanings were selected to test the reliability of responses. For example, question 8 stated that 'the software was difficult to use' whereas question 1 stated that 'the software can be used with ease'; if a respondent strongly agreed (5) that the software was easy to use, then the respondent should strongly disagree (1) that the software was difficult to use. This acts as a guide to the researcher, during the analysis phase, as to how valid and reliable the results are.
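As a minimal sketch of how this works in practice, the snippet below reverse-scores the negatively worded 5-point item so that it can be compared directly with its positively worded pair; the function name and the sample responses are illustrative assumptions, not part of the instrument itself.

```python
# A minimal sketch of reverse item scoring on a 5-point Likert scale.
# The pairing of Q1 and Q8 follows the article; the function name and
# sample responses below are illustrative assumptions.

def reverse_score(response: int, scale_max: int = 5) -> int:
    """Map a 1..scale_max response onto its mirror image, so that
    5 ('strongly agree') becomes 1 ('strongly disagree')."""
    return (scale_max + 1) - response

# One assessor's raw answers, keyed by question number (hypothetical).
responses = {1: 5, 8: 2}  # Q1: 'used with ease', Q8: 'difficult to use'

# After reversing the negatively worded Q8, consistent answers converge.
gap = abs(responses[1] - reverse_score(responses[8]))
print(f"Q1 = {responses[1]}, reversed Q8 = {reverse_score(responses[8])}, gap = {gap}")
```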
Progressing to the layout of the assessment tool, the main rationale behind segmenting questions into related topics was to improve the speed and readability of the tool for practitioners (Tergan, 1998), combined with the tool consisting of scored sections to gauge the strengths and weaknesses of the hardware and software. This also allows assessors to score the educational technology in different areas in order to better understand its capabilities.
Progressing to the selection of criteria for the evaluation, a meta-analysis conducted by Peng, Lin & Crouse (2011) found that 'active video games' (AVGs) produced results similar to moderate physical exertion; these results are strengthened by a study conducted by Thin, Brown & Meenan (2013), which correlates with the findings of the meta-analysis. This provided the basis for the inclusion of questions 10, 11, 15, 16 and 18. Secondly, a study conducted by Wastiau, Kearney & Van den Berghe (2009, as cited in Ulicsak & Williamson, 2010) found that 29% of respondents listed either lack of information and support (17%) or technical problems (12%) as a reason for not utilising games in education, which prompted the inclusion of questions 1, 3, 4, 8, and 25-30. Finally, Oosterhof (2009, pp. 271-276) highlights the need for educators to construct learning experiences which consist of both formative assessment (for learning) and summative assessment (of learning), which provided the basis for the inclusion of questions 2, 9 and 19-24.
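As an illustration of how these groupings can feed the scored sections described earlier, the following sketch averages one assessor's responses per topic. Only the question numbers are drawn from the rationale above; the section labels and the responses are hypothetical, and questions 25-30 are omitted for brevity.

```python
# A minimal sketch of per-section scoring. Question groupings follow
# the article's rationale; section labels and responses are invented,
# and negatively worded items (e.g. Q8) are assumed already reversed.

from statistics import mean

sections = {
    "physical activity (Q10, 11, 15, 16, 18)": [10, 11, 15, 16, 18],
    "information, support and usability (Q1, 3, 4, 8)": [1, 3, 4, 8],
    "formative/summative assessment (Q2, 9, 19-24)": [2, 9, 19, 20, 21, 22, 23, 24],
}

responses = {
    1: 4, 3: 5, 4: 3, 8: 3,
    10: 4, 11: 4, 15: 3, 16: 5, 18: 4,
    2: 3, 9: 4, 19: 3, 20: 4, 21: 2, 22: 3, 23: 4, 24: 3,
}

for label, questions in sections.items():
    print(f"{label}: {mean(responses[q] for q in questions):.2f} / 5")
```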
Considering all the aforementioned issues, and for the purposes of evaluating the Kinect-Xbox 360 gaming system, a Likert checklist combined with qualitative questions was selected to provide a deeper understanding of the assessor's experience and to reinforce the data collected.
Application of the assessment tool
Following the use of the assessment tool by 10 physical education teachers of varying experience levels, the following conclusions can be made. First and foremost, the assessment tool was easy for the assessors to understand due to the segmentation of questions, as per the recommendations of Tergan (1998). By taking this approach, respondents could be confident in the responses they supplied, which in turn suggests the data is more valid.
Another key attribute is that the checklist design can be altered quickly with little impact on the results of other sections, namely because this design requires few additional resources but is also accommodating of them (Harvey, 1998). There is no reason a qualitative collection method could not be added with little pre-planning, if desired.
However, this leads to the largest concern resulting from the assessment process: the imbalance of qualitative and quantitative data created. The intent was a pragmatic (or mixed methods) approach rather than a positivist approach, which yields a quantitative rather than a holistic understanding of the software's potential impact (see Mackenzie & Knipe, 2006, pp. 195-197 and McNaught, Rice & Tripp, 2000, p. 1.5).
Echoing the concerns of Oliver (2000) and Tergan (1998), the validity of the responses must be called into question, as some assessors agreed that the software was easy to use (Q1) yet also agreed that the software was difficult to use (Q8). Questioning about the ratings given revealed that the language used was ambiguous: it is possible to experience difficulty with the software while also finding it easy to use. Similarly, the meaning of 'The content meets the standards of Level 9, AusVELS: Health and Physical Education domain' was unclear, as the domain is split into Health and Physical Education (which have three sections each). To remedy this, qualitative questioning should be used to expand this crucial question and clarify any confusion.
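One way to operationalise this check during analysis is sketched below: any respondent whose answer to Q1 diverges from their reverse-scored answer to Q8 is flagged for qualitative follow-up. The respondent data and the one-point tolerance are assumptions made for illustration, not a published cut-off.

```python
# A minimal sketch of screening returned checklists for the Q1/Q8
# contradiction described above. Respondent data is hypothetical and
# the one-point tolerance is an assumed threshold.

def reverse_score(response: int, scale_max: int = 5) -> int:
    return (scale_max + 1) - response

respondents = [
    {"id": "T01", 1: 4, 8: 2},  # consistent: agrees easy, disagrees difficult
    {"id": "T02", 1: 4, 8: 4},  # contradictory: agrees with both statements
]

for r in respondents:
    gap = abs(r[1] - reverse_score(r[8]))
    if gap > 1:
        print(f"{r['id']}: Q1/Q8 responses diverge by {gap} points - "
              "flag for qualitative follow-up")
```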
However, not all assessors agreed that the software has significant educational potential for furthering or deepening students' understanding of movement education under the broader umbrella of Health and Physical Education, as it can be just as easy to achieve the same results in an actual setting rather than a virtual space. This prompts the need either to investigate learner outcomes from connecting virtual spaces with actual spaces and build a section of the Likert checklist which reflects this, or to narrow the area of investigation from interactive technologies in physical education to the more focused idea of movement education through interactive dance education.
Carifio & Perla (2007) advocate the need for accurate scoring (or weighting) of Likert results, which was deficient in the assessment design. The main rationale for this surrounds how the respondent differentiates between the different values, with Carifio & Perla (2007) asking a key question: what is the difference between 'agree' and 'strongly agree'? They then point out that sections must be at least six to eight statements long in order to improve validity and reliability. This illustrates two key flaws in the assessment construct which need to be addressed before the data can be considered valid, suggesting the alternation between a quantitative Likert question and a qualitative explanation.
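As a hedged illustration of how a section's reliability could be quantified once it reaches that six-to-eight-statement length, the sketch below computes Cronbach's alpha, a standard internal-consistency estimate. Alpha is not named in the article, and the response matrix is invented for the example.

```python
# A hedged sketch of estimating a section's internal consistency with
# Cronbach's alpha (a standard reliability statistic, not one the
# article names). Rows are respondents; columns are the six items of
# one hypothetical checklist section.

from statistics import variance

scores = [
    [4, 5, 4, 3, 4, 4],
    [3, 4, 3, 3, 4, 3],
    [5, 5, 4, 4, 5, 5],
    [2, 3, 2, 3, 2, 3],
]

k = len(scores[0])                                   # items in the section
item_vars = [variance(col) for col in zip(*scores)]  # per-item variance
total_var = variance([sum(row) for row in scores])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha for this section: {alpha:.2f}")
```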
Lastly, it was the consensus of both the researcher and the evaluation group that the assessment tool was too short and did not cover aspects such as cost and the extra resources required, both of which are essential to know when considering implementing new technologies within a school. There should also be a section on professional development, as not many staff may be proficient with the Kinect system or the Just Dance 3 software.
Conclusion
It is clear from this article that it is very difficult for the beginning researcher to develop a clear, concise and effective assessment tool to collect the data they desire, in light of the numerous issues associated with movement in education and the complex nature of method design. In this situation a quantitative, positivist Likert checklist was combined with qualitative questioning in order to assess both the Kinect sensor for the Xbox 360 and the Just Dance 3 software. This had limited success, with the main issues surrounding the accuracy of the responses, confusion over the language used in the statements, and the lack of consideration of the costs and resources required to actually implement this technology in a school setting.
References
Apperley, T., & Walsh, C. (2012). What digital games and literacy have in common: A heuristic for understanding pupils' gaming literacy. Literacy, Vol. 46, pp. 115-122.
Arnold, P. J. (1979). Meaning in movement, sport and physical education. London: Heinemann.
Brysch, C. P., Huynh, N., & Scholz, M. (2012). Evaluating educational computer games in geography: What is the relationship to curriculum requirements? Journal of Geography, Vol. 111, No. 3, pp. 102-112.
Carifio, J., & Perla, R. J. (2007). Ten common misunderstandings, misconceptions, persistent myths and urban legends about Likert scales and Likert response formats and their antidotes. Journal of Social Sciences, Vol. 3, No. 3, pp. 106-116.
Fan, S., & Lê, Q. (2011). Developing a valid and reliable instrument to evaluate users' perception of web-based learning in an Australian university context. MERLOT Journal of Online Learning and Teaching, Vol. 7, No. 3.
Harvey, J. (1998). The LTDI Evaluation Cookbook. Glasgow: Learning Technology Dissemination Initiative.
Mackenzie, N., & Knipe, S. (2006). Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, Vol. 16, No. 2, pp. 193-205.
McNaught, C., Rice, M., & Tripp, D. (2000). Handbook for learning-centred evaluation of computer-facilitated learning projects in higher education. Murdoch University.
Oliver, M. (2000). An introduction to the evaluation of learning technology. Educational Technology & Society, Vol. 3, No. 4, pp. 20-30.
Oosterhof, A. (2009). Integrating assessments into instruction. In Developing and Using Classroom Assessments (4th ed., pp. 271-276). Pearson Education.
O'Toole, J., & Bennett, D. (2010). Educational Research: Creative Thinking & Doing. South Melbourne: Oxford University Press.
Peng, W., Lin, J.-H., & Crouse, J. (2011). Is playing exergames really exercising? A meta-analysis of energy expenditure in active video games. Cyberpsychology, Behavior, and Social Networking, Vol. 14, No. 11, pp. 681-688.
Squire, K. (2006). From content to context: Videogames as designed experience. Educational Researcher, Vol. 35, No. 8, pp. 19-29.
Tergan, S. O. (1998). Checklists for the evaluation of educational software: Critical review and prospects. Innovations in Education and Training International, Vol. 35, No. 1, pp. 9-20.
Thin, A., Brown, C., & Meenan, P. (2013). User experiences while playing dance-based exergames and the influence of different body motion sensing technologies. International Journal of Computer Games Technology.
Wakamatsu, K. (2011). From pop culture to sophisticated art: Helping K-12 students bridge the gap. Journal of Dance Education, Vol. 11, No. 4, pp. 129-133.