A number of instruments have been developed to assess teachers’ levels of TPACK. This post summarizes the various attempts to measure TPACK, using the five categories identified and described in the chapter How Do We Measure TPACK? (Koehler, Shin & Mishra, 2012) from the Educational Technology, Teacher Knowledge, and Classroom Impact handbook. Links to the article (and to the PDF version of the article, when available) are provided for each instrument described. Also, when available, links to the instruments are provided.
Surveys
The TPACK Survey – Schmidt et al. (2009)
This survey was developed through a collaboration between Iowa State University and Michigan State University to measure each of the seven components of TPACK. Initial survey development drew on a pilot study of 124 pre-service teachers. The resulting survey demonstrated internal consistency reliability (coefficient alpha) ranging from .75 to .92 for the seven TPACK subscales. The survey has since been revised to modify some of the items. The result is 54 Likert-type items that assess the seven components of TPACK in four content areas: Math, Science, Social Studies, and Literacy. Additional items on the survey collect demographic information and data about the teachers’ teacher education program.
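Coefficient alpha (Cronbach’s alpha), reported here and for the Archambault & Crippen survey below, summarizes how consistently the Likert-type items within a single subscale vary together. The short Python sketch below is only an illustration of that calculation, not code from either study; the response data and the function name are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for one subscale.

    `items` is a (respondents x items) array of Likert responses
    (e.g., 1-5) for a single TPACK subscale.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: six respondents answering a four-item subscale on a 1-5 scale.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(responses), 2))
```

As a common rule of thumb, subscale values above roughly .70, such as the .75 to .92 reported for this survey, are generally read as acceptable to good internal consistency.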
Many scholars have used the TPACK survey as a basis for assessing TPACK, modifying the survey to fit their particular needs, including translations, different subject areas, and different contexts.
Researchers are free to use the TPACK survey, provided they contact Dr. Denise Schmidt (dschmidt@iastate.edu) with a description of their intended usage (research questions, population, etc.), and the site locations for their research. The goal is to maintain a database of how the survey is being used, and keep track of any translations of the survey that exist.
The survey, along with instructions, is available here as:
Survey – Archambault & Crippen (2009)
This survey was designed to measure the seven components of TPACK of 596 online K-12 teachers. Twenty-four Likert-type items measure the seven components. Unlike the TPACK Survey by Schmidt et al., content knowledge is measured generally, not within four subdomains. Reliability testing in the form of Cronbach’s alpha coefficients was conducted for each of the subscales to determine the level of internal consistency. These levels were acceptable (Gall, Gall, & Borg, 2003), ranging from alpha = .699 for the technology content domain to alpha = .888 for the technology domain.
The survey, along with instructions, is available here as:
Open-Ended Questionnaires
Survey Instrument – So & Kim (2009)
The questionnaire was developed as part of a study to investigate pre-service teachers’ understanding of technology integration through problem-based learning (PBL). The participants were 97 pre-service teachers in Singapore, split almost equally by student-teaching location between primary and secondary schools.
Examples of the open-ended questions included “How, do you think, the problem based learning helped students learn?” and “What do you see as the main strength and weakness of integrating ICT tools into your PBL lesson?”.
The questionnaires are analyzed using thematic coding of the open-ended responses and frequency counts of the emerging codes.
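As a rough sketch of that frequency-count step (and only a sketch; the thematic code labels below are invented for illustration, not taken from So & Kim, 2009), the tally might be produced like this:

```python
from collections import Counter

# Hypothetical thematic codes assigned to each open-ended response
# during coding; the labels are made up for illustration.
coded_responses = [
    ["authentic problem", "ICT as resource"],
    ["collaboration", "ICT as resource"],
    ["authentic problem", "teacher as facilitator"],
    ["ICT as resource"],
]

# Frequency count of the emerging codes across all responses.
code_frequencies = Counter(code for codes in coded_responses for code in codes)
for code, count in code_frequencies.most_common():
    print(f"{code}: {count}")
```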
Daily Journal Prompts – Niess, Lee, Sadria & Suharwoto (2006)
The authors conducted four weeks of professional development (PD) with in-service teachers to determine the impact of PD upon teachers’ TPACK. In conjunction with an observation checklist and interviews (described in the observations and interviews sections), the authors conducted content analysis of teachers’ daily journal entries and assignments. The purpose of these qualitative methods was to determine the stage of teachers’ TPACK: recognizing, accepting, adapting, exploring, or advancing.
Performance Assessments
Teacher Work Sample (TWS) Rubric – Tripp, Graham & Wentworth (2009)
The authors investigated teacher work samples (TWS) to determine the extent of technology integration by pre-service intern teachers. Initial analysis of the TWS determined that the rubric, and the guidance provided by the field instructors responsible for assessing the TWS, encouraged only teacher use of technology. The authors introduced a clarified rubric for evaluating TWS, focused on student interns’ use of 21st-century technologies, along with further training for the field instructors. The performance assessment rubric has a scale from 0 (not met/missing evidence), characterized by the absence of technology use, to 5 (exceeds expectation), characterized by student use of technology.
Pedagogically Inclusive Instrument – Harris, Grandgenett & Hofer (2010)
Lesson-Plan Coding – Kereluik, Casperson & Akcaoglu (2010)
Pre- and Post-Treatment Assessments – Graham, Borup & Smith (2012)
Interviews
Interview – Niess et al. (2009)
The authors described the five steps mathematics teachers follow to integrate technology in their instruction: recognizing, accepting, adapting, exploring, and advancing. The authors then developed a rubric with four themes, each of which includes those five steps: curriculum and assessment, learning, teaching, and access. The authors interviewed a teacher who had formerly been in their mathematics teacher education program. The teacher described the use of graphing calculators and the Geometer’s Sketchpad, and indicated that she used the Geometer’s Sketchpad in her geometry class, but not in Algebra 1. Further, the teacher shared that in the three years she had taught, she had used technology in just one student-centered lesson. In conjunction with teacher interviews, the rubric may be used to determine a teacher’s stage of technology integration in mathematics instruction.
Pre- and Post-Observation Interview Protocol – Niess, Lee, Sadria & Suharwoto (2006)
The authors conducted four weeks of professional development (PD) with in-service teachers to determine the impact of PD upon teachers’ TPACK. In conjunction with content analysis of teachers’ daily journals and assignments and observations (described in the open-ended questionnaires and observations sections), the authors interviewed teachers before and after PD. The purpose of these qualitative methods was to determine the stage of teachers’ TPACK: recognizing, accepting, adapting, exploring, or advancing.
Examples from the pre-observation interview protocol include, “What should students know prior in order to be prepared for the work in the lesson?” and “Do you think the integration of spreadsheets with learning mathematics is a good idea? Why or why not?”.
Examples from the post-observation interview protocol include, “How did your planning for teaching this lesson change?”, and “Did your students learn the mathematics concept in this lesson?”
Observations
Observation Checklist – Niess, Lee, Sadria & Suharwoto (2006)
The authors conducted four weeks of professional development (PD) with in-service teachers to determine the impact of PD upon teachers’ TPACK. The purpose of these qualitative methods was to determine the stage of teachers’ TPACK: recognizing, accepting, adapting, exploring, or advancing. In conjunction with content analysis of teachers’ daily journals and assignments and interviews (described in the open-ended questionnaires and interviews sections), the authors conducted observations with a checklist modified from Shulman’s list for Pedagogical Content Knowledge (PCK).
The checklist included elements for comprehension, transformation, instruction, technology, and evaluation. For example, the transformation element of the observation checklist included “preparation of technology being selected” and “adaptation to the characteristics and levels of the students.”