The Many Faces of TPACK/TPACK Measures

Why should we measure TPACK?

by Ahmet Ilci

Over the years, several instruments have been developed to measure constructs such as teachers' technology skills, technology integration, access to technology, and teachers' attitudes about technology (Becker & Riel, 2000; Keller, Bonk, & Hew, 2005; Knezek & Christiansen, 2004). Conducting research on the development and measurement of TPACK is an important and difficult challenge, because TPACK is a complicated construct comprising many components, and measuring it effectively depends on the relationships of these components with each other (Koehler, Shin and Mishra, 2011). Because of this complexity, obtaining reliable and valid results when assessing TPACK is an important process, and researchers should know how to create and evaluate effective measurement tools given its sophisticated nature. For these reasons, the process of developing and evaluating TPACK measures should be clear and comprehensible.

This chapter reviews various techniques for measuring TPACK and specifically addresses two questions: (1) What types of measures are used in the TPACK literature? and (2) What are the processes and strategies for developing and applying these measurement tools? To collect the data, a literature review was conducted using the METU library. This process yielded ten articles, which are the main articles focusing on the types of measurement tools and on the development and evaluation of these tools.

Effective measurement of TPACK includes both self-report and performance-based measures. TPACK measures should also remain sensitive to preservice teachers' pedagogy, technology, and content domains (Abbitt, 2011). Consequently, researchers should use different measurement tools to assess TPACK because of the changeable character of teachers' knowledge and the multiple dimensions of TPACK. Generally, surveys developed to measure TPACK are based on the seven subscales that form the TPACK model (Mishra & Koehler, 2006).

This chapter comprises four main parts. The first part begins with the question "Why should we measure TPACK?" and points out the importance of measuring TPACK. The second part is about the measurement tools of TPACK; five main tools, namely self-report measures, open-ended questionnaires, performance assessments, interviews, and observations, are discussed in detail. The third part focuses on the development phase of TPACK measurement tools, with detailed descriptions of the strategies and steps involved. The fourth part is about the challenges of measuring TPACK; the challenges faced during the development and implementation of TPACK measures are clarified in depth.

Measuring Instruments for TPACK

According to the classification below, there are five main techniques used to measure TPACK (Koehler, Shin and Mishra, 2011): (1) self-report measures, (2) open-ended questionnaires, (3) performance assessments, (4) interviews, and (5) observations.

Self-Report Measures

Self-report measures are the most popular TPACK measuring technique in the literature: in 23 of the 100 studies reviewed, researchers used a self-report measure to assess TPACK (Koehler, Shin and Mishra, 2011). In self-report measures, respondents are asked questions about how they integrate technology into teaching and rate themselves. In most research, self-report measures are based on the main domains of TPACK (Koehler, Shin and Mishra, 2011). While creating and developing self-report measures, researchers may add or remove some TPACK domains to obtain better results. Some researchers also use an existing survey as the basis for their own. For example, Archambault and Crippen (2009) built on the study of Koehler and Mishra (2005) and developed a more functional and robust survey that extends to general contexts, multiple content areas, and multiple approaches to professional development.
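To illustrate how self-report data of this kind are typically summarized, the minimal sketch below computes a mean rating per TPACK subscale from Likert-style responses. The item names, the item-to-subscale mapping, and the response values are all hypothetical and do not come from any of the instruments cited above.

```python
# Minimal sketch: summarize hypothetical Likert responses (1-5) from a
# self-report TPACK survey into one mean score per subscale.
import pandas as pd

# Hypothetical responses: rows = respondents, columns = survey items
responses = pd.DataFrame({
    "tk1": [4, 5, 3], "tk2": [4, 4, 2],        # Technological Knowledge items
    "pk1": [5, 4, 4], "pk2": [5, 5, 3],        # Pedagogical Knowledge items
    "tpack1": [3, 4, 2], "tpack2": [4, 4, 3],  # TPACK items
})

# Hypothetical mapping of items to subscales (a full instrument would cover
# all seven TPACK domains)
subscales = {
    "TK": ["tk1", "tk2"],
    "PK": ["pk1", "pk2"],
    "TPACK": ["tpack1", "tpack2"],
}

# Mean rating per respondent for each subscale
scores = pd.DataFrame(
    {domain: responses[items].mean(axis=1) for domain, items in subscales.items()}
)
print(scores)
```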

Performance-Based TPACK Measures

Performance assessment is another popular measuring technique in the literature (Koehler, Shin and Mishra, 2011). In performance assessments, participants are assessed through a product they create or a task they complete at the end of an activity. The main idea behind using performance-based measures to assess TPACK is that students' work is the result of teachers' instructional design and planning; as a result, assessing the students' work provides evidence about preservice teachers' knowledge in the TPACK domains (Abbitt, 2011). Some TPACK performance assessments ask participants to create portfolios or reflective journals (Suharwoto, 2006), while others consist of scenario- or problem-based questions to solve (Graham, Tripp & Wentworth, 2009). The products created at the end of the task are evaluated by experts or researchers using a set of criteria. Harris, Grandgenett and Hofer (2010) developed a rubric to assess the TPACK of preservice teachers and evaluated teachers' products using this rubric; in these studies, rubrics were used to evaluate a project plan or a lesson. According to the results of these studies, although the rubric has not been tested with lesson plans created by experienced educators, it is most appropriate for use with preservice teachers' lesson plans (Abbitt, 2011).

Open-Ended Questionnaires

In 13 of the 100 studies reviewed, researchers used open-ended questionnaires to assess the TPACK of pre-service teachers (Koehler, Shin and Mishra, 2011). Open-ended questionnaires ask preservice teachers different types of questions and have them write about their teaching experiences with TPACK in their technology courses. For example, So and Kim (2009) asked teachers, "What do you see as the main strength and weakness of integrating ICT tools into your PBL lessons?" Hence, when creating and developing open-ended questionnaires, the questions should be designed to assess teachers' overall experience, and teachers judge their own knowledge by answering these questions. Coding and analyzing open-ended questionnaires is very challenging, so this method is not often preferred by researchers.

Interviews

Interviews are another method to measure participants' TPACK. In interviews, questions about TPACK knowledge are asked and the participants' answers are voice-recorded for later coding. For example, Ozgun-Koca (2009) asked participants about the advantages and disadvantages of using a calculator in the learning environment. In general, interview participants can be asked about the advantages and disadvantages of technological tools, evaluations of the TPACK domains, or their perceptions of TPACK.

Observations

Observation is another effective technique for measuring TPACK. Researchers examine how the level of TPACK changes over time by taking notes and recording video during observations (Koehler, Shin and Mishra, 2011). During an observation, researchers watch the classroom and take notes about how the teacher integrates technology into his or her teaching.

Development Phase of TPACK Measurement Tools

Different methods can be used to create a TPACK survey, and these methods can comprise different phases. One such method and its phases are presented in this part of the chapter. It consists of three main phases: (1) item pool, (2) validity and reliability analysis, and (3) translation of the TPACK survey.

A systematic, step-by-step approach is applied to develop TPACK measurement tools. First, items are gathered by subject matter experts in the three main TPACK domains: technology, content, and pedagogy. Then, the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's Test of Sphericity (BTS) are applied to check whether the items are suitable for exploratory factor analysis. If these analyses yield the required values, exploratory factor analysis is conducted to decide whether the items measure what they are intended to measure. If the factor analysis results are successful, discriminant reliability analysis and test-retest analysis are conducted to obtain more reliable results. Lastly, if all results reach the desired values, the survey is translated and back-translated to clear up grammatical errors.

Item Pool

The theoretical framework and the related literature are the main components of this phase (Sahin, 2011). The first step is reviewing the related literature on assessing technology use in educational settings. The critical point is to gather instruments intended to measure preservice teachers' self-assessments of the TPACK domains, not their attitudes (Schmidt et al., 2009). Instead of scanning the related literature, workshops can be organized to prepare items for the scale; the aim of organizing workshops to determine indicators and components is to reach many faculty members in the educational technology field and benefit from their ideas and experiences (Yurdakul, 2012). At the end of the literature search and the workshops, the data can be stored in different ways. As a result of this data-gathering process, all competencies and indicators are written in a booklet, and the booklet is used to create an item pool (Yurdakul, 2012). Another method for creating survey items is Dillman's methodology. Following Dillman's (2007) methodology, items are created by the first author and then reviewed by two knowledgeable technology education experts who have extensive experience with online teaching. After the survey is drafted, piloting starts with a think-aloud pilot. Although content validity can be established by having the instrument reviewed by experts, construct validity can begin to be verified by using a think-aloud strategy with interview participants while they read and answer the survey items (Dillman, 2007; Fowler, 2002).

Validity and Reliability Analysis

After the item pool phase, the gathered items can be sent to experts for approval, or statistical procedures can be applied to address validity and reliability. Exploratory and confirmatory factor analyses are applied for the validity analyses, and the Cronbach's alpha coefficient is calculated for the reliability analyses. Exploratory factor analysis is used to inspect the factor validity of the seven subscales. One aim of this factor analysis is to determine how well the survey discriminates teachers with high competency from teachers with low competency (Yurdakul et al., 2009).
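The following is a minimal sketch of such an exploratory factor analysis, assuming the third-party Python package factor_analyzer and a hypothetical file tpack_survey_responses.csv holding a respondents-by-items table of Likert ratings; it illustrates the general technique rather than the analysis pipeline of any of the cited studies.

```python
# Minimal EFA sketch over hypothetical TPACK survey responses.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package (assumed installed)

# Hypothetical data: rows = respondents, columns = survey items
responses = pd.read_csv("tpack_survey_responses.csv")

# Extract seven factors, mirroring the seven TPACK subscales
fa = FactorAnalyzer(n_factors=7, rotation="varimax")
fa.fit(responses)

# Item loadings: rows = items, columns = extracted factors; items are expected
# to load most strongly on the factor matching their intended subscale
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))
```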

First, the Kaiser-Meyer-Olkin measure and then Bartlett's Test of Sphericity are applied to determine whether the characteristics of the data set are appropriate for exploratory factor analysis (Sahin, 2011). Factor analysis is applied to the items within each subscale and is an important step for deciding whether the patterns fit well into the TPACK constructs (Schmidt et al., 2009). Researchers use the Kaiser-Guttman rule (which states that factors with eigenvalues greater than 1 should be accepted) to identify the number of factors and their composition from the data analysis. If the characteristics of the data set are appropriate for factor analysis, the factor analysis is applied. After the validity measurements, the reliability phase starts.
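A minimal sketch of these suitability checks is shown below, again assuming the factor_analyzer package and the hypothetical responses table used above; the KMO value, the Bartlett test result, and the eigenvalue-based factor count correspond to the checks described in this section.

```python
# Minimal sketch: KMO, Bartlett's Test of Sphericity, and the Kaiser-Guttman rule.
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (  # third-party package (assumed installed)
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical data: rows = respondents, columns = survey items
responses = pd.read_csv("tpack_survey_responses.csv")

# Kaiser-Meyer-Olkin measure of sampling adequacy (values near 1 are desirable)
kmo_per_item, kmo_total = calculate_kmo(responses)

# Bartlett's Test of Sphericity (a significant result supports factorability)
chi_square, p_value = calculate_bartlett_sphericity(responses)

# Kaiser-Guttman rule: count factors whose eigenvalues exceed 1
eigenvalues = np.linalg.eigvalsh(responses.corr().to_numpy())
n_factors = int((eigenvalues > 1).sum())

print(f"KMO = {kmo_total:.2f}, Bartlett chi2 = {chi_square:.1f} (p = {p_value:.3f})")
print(f"Factors with eigenvalue > 1: {n_factors}")
```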

Internal consistency reliability is applied to the measurements. Researchers calculate an internal consistency value for each TPACK domain using the Cronbach's alpha technique. The Cronbach's alpha internal consistency coefficient is calculated to test the consistency of the items of the scale, and test-retest reliability is calculated to determine the consistency of the scale over time (Yurdakul et al., 2011). An internal consistency value higher than .70 is considered good, and a value closer to 1.00 is considered very good (Fraenkel & Wallen, 2003). Instead of internal consistency reliability, test-retest reliability can also be applied. At the end of the reliability phase, test-retest reliability is calculated to determine the consistency of the measurement over time; in this phase, the scale was administered to preservice teachers twice within three weeks, and the relation between the two administrations is calculated with the Pearson product-moment correlation coefficient (Yurdakul et al., 2009). Problematic items are then eliminated from the survey. Since the Cronbach's alpha and item correlation results yielded the proper values, the items correlate with each other and the survey is reliable (Sahin, 2011).
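The sketch below illustrates these two reliability checks: Cronbach's alpha computed directly from its definition, and test-retest reliability as a Pearson correlation between two administrations. The item ratings and retest scores are hypothetical values for illustration, not data from the cited studies.

```python
# Minimal sketch: Cronbach's alpha and test-retest (Pearson) reliability.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of ratings for one subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical ratings for one TPACK subscale (rows = respondents)
subscale_items = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 3]])
print(f"Cronbach's alpha = {cronbach_alpha(subscale_items):.2f}")  # >= .70 is acceptable

# Test-retest: subscale totals from two administrations three weeks apart
scores_time1 = subscale_items.sum(axis=1)
scores_time2 = np.array([12, 11, 15, 8, 10])  # hypothetical second administration
r, p = pearsonr(scores_time1, scores_time2)
print(f"Test-retest r = {r:.2f} (p = {p:.3f})")
```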

Survey Translation

Phase 3 involves translation of the survey into English. This phase comprises translation and back-translation. First, the survey is translated from the original language into English independently by the authors and professional translators. The English translation is then back-translated into the original language by a bilingual person for crosschecking. The two translated forms are compared with each other and modifications are made accordingly; the structure and meaning of the scale items are not changed (Sahin, 2011).

Challenges of measuring TPACK

Researchers encounter problems while measuring the TPACK of preservice teachers, and they face different problems with each measurement tool. The two main problems during the measurement process are understanding the effects of teachers' domain knowledge on their current teaching practices, and the reliability and validity concerns of TPACK measurement methods (Abbitt, 2011). As a result, researchers tend to try different methods to measure TPACK because of the dynamic character of preservice teacher education.

Researchers have used different techniques to overcome reliability and validity issues. Sahin (2011) used exploratory factor analysis to inspect the factor validity of the seven TPACK subscales; the Kaiser-Meyer-Olkin measure and then Bartlett's Test of Sphericity were applied first to determine whether the characteristics of the data set were appropriate for exploratory factor analysis. Another approach to reliability is that of Yurdakul et al. (2011), who applied internal consistency analysis to their measurements; an internal consistency value higher than .70 is considered good, and a value closer to 1.00 is considered very good (Fraenkel & Wallen, 2003; Gay, Mills & Airasian, 2000). Lastly, at the end of the reliability phase, test-retest reliability is calculated to determine the consistency of the measurement; the scale was administered to preservice teachers twice within three weeks, and the relation between the two administrations was calculated with the Pearson product-moment correlation coefficient.

Historically, mostly qualitative methods were used to identify and define TPACK, but Mishra and Koehler (2005) conducted quantitative studies to measure participants' time, effort, and perceptions of the learning experience. According to the results of this study, teachers' knowledge shows changeable characteristics and depends on the context in which an activity happens, as well as on interactions among students within this context (Abbitt, 2011).

Furthermore, Koehler, Mishra and Yahya (2007) tried to measure TPACK through a discourse analysis process. The research comprised mainly content analysis techniques, such as taking notes from group discussions, e-mail records between group members, and surveys conducted throughout the whole semester (Abbitt, 2011). However, the possibility of subjectivity and bias in coding is the main challenge of this study, as the researchers themselves acknowledge. In performance assessments, participants are assessed through a product they create or a task they complete at the end of an activity; some TPACK performance assessments ask participants to create portfolios or reflective journals (Suharwoto, 2006), while others consist of scenario- or problem-based questions to solve (Graham, Tripp & Wentworth, 2009). Although performance assessments are a strong way to gather data, having participants evaluate their own work also introduces bias. Schmidt et al. (2009) also state that this approach takes a long time and focuses on a single, unique context; focusing on a unique context does not provide enough data for comparison with prior findings. Lastly, according to Abbitt (2011), the data collected with this method do not reflect students' whole knowledge or their perceptions of the TPACK domains. Further, getting enough data from e-mail records and other writings among students is not practical.

In brief, reliability and validity are among the main problems researchers encounter during the measurement process. To overcome them, researchers use different analysis techniques such as test-retest reliability or the Pearson product-moment correlation. Other problems are the changeable character of teachers' knowledge and bias in the evaluation and coding process. Because these issues cause problems during measurement, researchers can use different methods in the measuring and evaluating process of TPACK.

References

Kabakci Yurdakul, I., Odabasi, H., Kilicer, K., Coklar, A., Birinci, G., & Kurt, A. (2012). The development, validity and reliability of TPACK-deep: A technological pedagogical content knowledge scale. Computers & Education, 58(3), 964-977. doi:10.1016/j.compedu.2011.10.012

Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological Pedagogical Content Knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123-149.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment rubric. In D. Gibson & B. Dodge (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2010 (pp. 3833-3840). Chesapeake, VA: AACE.

Abbitt, J. (2011). Measuring Technological Pedagogical Content Knowledge in preservice teacher education: A review of current methods and instruments. Journal of Research on Technology in Education, 43(4), 281-300.

Archambault, L., & Crippen, K. (2009). Examining TPACK among K-12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education (CITE Journal), 9(1), 71-88.

Sahin, I. (2011). Development of survey of Technological Pedagogical and Content Knowledge (TPACK). Turkish Online Journal of Educational Technology - TOJET, 10(1), 97-105.

Harris, J. B., & Hofer, M. J. (2011). Technological Pedagogical Content Knowledge (TPACK) in action: A descriptive study of secondary teachers' curriculum-based, technology-related instructional planning. Journal of Research on Technology in Education, 43(3), 211-229.

Koehler, M. J., Shin, T. S., & Mishra, P. (2011). How do we measure TPACK? Let me count the ways. In R. N. Ronau, C. R. Rakes, & M. L. Niess (Eds.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches. Hershey, PA: Information Science Reference.

Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A new framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Fraenkel, J. R., & Wallen, N. E. (2003). How to design and evaluate research in education. New York, NY: McGraw-Hill.