6 Standardized Assessments
Overview Table
| Assessment | Purpose | Format |
| --- | --- | --- |
| TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety) | To improve teamwork in healthcare teams, with the goal of improving patient safety and care (King et al., 2008). | A group of free online tools and strategies for healthcare teams based on 4 key competencies: team leadership, situation monitoring, mutual support, and communication (Agency for Healthcare Research and Quality, 2023g). Includes many evaluation scales, such as the TeamSTEPPS Teamwork Attitude Questionnaire (T-TAQ; Agency for Healthcare Research and Quality, 2024b). |
| ICCAS (Interprofessional Collaborative Competency Attainment Survey) | Assesses the core competencies of interprofessional collaboration: communication, collaboration, roles and responsibilities, working together as a team, conflict management, and having a patient- or family-centered approach (Archibald et al., 2016; Interprofessional Education Collaborative, 2016). | 21 questions using a 5-point Likert scale; completed after IPE, with students retrospectively reflecting on their experiences both before and after IPE (Archibald et al., 2014). |
| TOCK-IP (Tool to Observe the Construction of Knowledge in Interprofessional Teams) | Helps clinical trainers guide their observations and provide higher-quality feedback to the health professionals being trained (Floren et al., 2022). | Observation-based formative assessment containing 2 parts: Part 1 uses a checklist to assess attitudes, and Part 2 uses a 3-point rating scale to assess behaviours (Floren et al., 2022). |
| iTOFT (Individual Teamwork Observation and Feedback Tool) | Formatively assesses the collaboration abilities of students in interprofessional teams through the observation of teamwork behaviours (Thistlethwaite et al., 2016). | Includes a Basic version for learners with limited teamwork experience and an Advanced version for senior students and developing healthcare professionals; both versions use observable-behaviour checklists (Thistlethwaite et al., 2016). |
| IPA (Interprofessional Professionalism Assessment tool) | Observes and assesses students' professional and collaborative behaviours as they provide care to a patient (Frost et al., 2018). | Uses 5-point Likert scales to assess 26 behaviours; includes 2 open-ended questions where raters can comment on students' strengths and areas for improvement regarding professionalism (Frost et al., 2018). |
| RIPLS (Readiness for Interprofessional Learning Scale) | Assesses how prepared students are to learn interprofessionally (Binienda, 2015). | Self-report measure with 19 items rated on 5-point Likert scales; students complete the RIPLS before or after IPE (Binienda, 2015). |
| JTOG (Jefferson Team Observation Guide) | Assesses the effectiveness of interprofessional teams through observation and evaluation of positive team behaviours (Collins et al., 2019). | Formative assessment consisting of 14 Likert-scale items, 3 short-answer questions, and a debrief; can be completed on paper or on the JTOG mobile app; has modified versions for patients, support people, and individuals (Collins et al., 2019). |
| TOSCE (Team Observed Structured Clinical Encounter) | Assesses how well teams collaborate and communicate with one another while caring for a patient (National Center for Interprofessional Practice and Education, 2016). | Modified from the McMaster-Ottawa scale; uses a 3-point scale (Lie et al., 2015); includes a version for individuals and one for teams (National Center for Interprofessional Practice and Education, 2016). |
Table 8: Overview of standardized assessments used in Interprofessional Health Education
TeamSTEPPS is a methodical approach designed to incorporate collaboration and teamwork into healthcare practice, with a focus on improving healthcare effectiveness, safety, and quality (Baker et al., 2024; Baker et al., 2006; King et al., 2008). TeamSTEPPS educates participants on the tools and strategies needed to become proficient in team skills and overcome obstacles (King et al., 2008; Agency for Healthcare Research and Quality, 2023j). Its curriculum resources are free and available online (Agency for Healthcare Research and Quality, 2023j).
Core Competencies and Tools
TeamSTEPPS identifies 4 key skills that health professionals can be trained to acquire to help a team function most effectively (Agency for Healthcare Research and Quality, 2023g; King et al., 2008). They are:
- Team leadership.
- Situation monitoring.
- Mutual support.
- Communication.
Each competency has embedded tools to help healthcare team members succeed in achieving all 4 key skills (Agency for Healthcare Research and Quality, 2023g).
Team leadership: Effective teams will have an assigned leader who will create and monitor patient care plans (Agency for Healthcare Research and Quality, 2023d). Team leadership also focuses on equipping all team members to be competent to lead when necessary (Agency for Healthcare Research and Quality, 2023d).
Huddle: When there is a change to the care plan or an important update, one member (usually the leader) will pull the team aside to discuss the new information (Agency for Healthcare Research and Quality, 2023a).
Debrief: After an intervention, the team meets to discuss what occurred and reflect on their performance (Agency for Healthcare Research and Quality, 2023b).
Situation Monitoring: A 3-step cycle whose end goal is for all team members to share a mental model (Agency for Healthcare Research and Quality, 2023f). In the first step, situation monitoring, each team member learns to be consistently aware of their environment (Agency for Healthcare Research and Quality, 2023g). This produces situation awareness (step 2), where each member has a better understanding of the situation, the patient, how the team is functioning, and whether the team is reaching its goals (Agency for Healthcare Research and Quality, 2023g). When all members hold this awareness, the team arrives at a shared mental model (step 3; Agency for Healthcare Research and Quality, 2023f).
Tools for Situation Monitoring
Status of Patient, Team Members, Environment, Progress Toward Goal (STEP): An instrument used for ongoing surveillance of a medical scenario (Agency for Healthcare Research and Quality, 2023p). It helps the team identify important information about the patient, recognize safety concerns, and determine when a change in the healthcare plan may be needed (Agency for Healthcare Research and Quality, 2023p).
I’M SAFE Checklist: A checklist each team member can review to assess their own well-being, and how it may impact their ability to safely provide care (Agency for Healthcare Research and Quality, 2023o).
Mutual support: Team members help each other by sharing tasks and providing feedback (Agency for Healthcare Research and Quality, 2023e). This decreases the risk that a team member will become overworked and stressed, which in turn reduces the chance of safety concerns and errors (Agency for Healthcare Research and Quality, 2023e).
Tools for Mutual Support
Concerned, Uncomfortable, Safety Issue (CUS): A technique for communicating a concern to a teammate (Agency for Healthcare Research and Quality, 2023m). Healthcare providers first state their concern, then explain why it makes them uncomfortable (Agency for Healthcare Research and Quality, 2023m). If the concern is not addressed in the subsequent conversation, team members are directed to label the concern as a safety issue (Agency for Healthcare Research and Quality, 2023m).
Describe, Express, Suggest, Consequences (DESC): Another mnemonic for addressing conflict (Agency for Healthcare Research and Quality, 2023n). Participants first describe the concerning situation, and then express their related emotions (Agency for Healthcare Research and Quality, 2023n). Next, the individual suggests an alternative way to approach the situation in the future (Agency for Healthcare Research and Quality, 2023n). Finally, any consequences to the patient, team members, care plan, or team goals are communicated (Agency for Healthcare Research and Quality, 2023n).
Communication: Allows team members to coordinate actions and is the basis of success in all of the other competencies (Agency for Healthcare Research and Quality, 2023c). Communication is not always verbal or deliberate (e.g. body language) (Agency for Healthcare Research and Quality, 2023c).
Tools for Communication
Call Out: Occurs when one team member states important information in a loud and clear manner to directly inform all other members in the room (Agency for Healthcare Research and Quality, 2023k).
Check Back: A team member repeats back what has been communicated to them to confirm they have understood correctly (Agency for Healthcare Research and Quality, 2023l).
Situation, Background, Assessment, Recommendation and Request (SBAR): An effective communication strategy for providing critical updates on a medical situation and recommending appropriate courses of action (Agency for Healthcare Research and Quality, 2019). First, the team member identifies the concern with patient care (the situation), and then provides background information pertaining to that concern (Agency for Healthcare Research and Quality, 2019). Next, they give their assessment of the problem and recommend a course of action (Agency for Healthcare Research and Quality, 2019).
Delivering TeamSTEPPS training to thousands of healthcare providers has produced positive outcomes in teamwork and patient safety (King et al., 2008). Following this success in healthcare settings, some universities have begun to introduce TeamSTEPPS into their Interprofessional Education curricula (Margalit et al., 2009).
The TeamSTEPPS curriculum has been validated across many health profession programs, including medicine, nursing, pharmacy, dentistry, and radiologic technology (Hobgood et al., 2010; Luebbers et al., 2016; Wright et al., 2013; Figueroa-Sierra et al., 2014; Reed et al., 2017; Horsley et al., 2016; Liaw et al., 2014; Djukic et al., 2012; Tofil et al., 2014; Headrick et al., 2012; Norsen & Spillane, 2012; Shrader & Griggs, 2014; Umland et al., 2017; Margalit et al., 2009; Aston et al., 2012; Gong, 2017; Peeters et al., 2017; Baker et al., 2015; Zhang et al., 2015; Posmontier et al., 2012).
Several student evaluation methods have been used in TeamSTEPPS programs, including external evaluators, open-ended qualitative student reflections, competency-based knowledge quizzes, video vignettes, and word clouds.
Using an External Evaluator: Student performance is assessed by faculty, standardized patients, or external raters during clinical simulations, volunteer work, or projects (Luebbers et al., 2016; Reed et al., 2017; Baker et al., 2015; Posmontier et al., 2012). One tool used in this method is the Standardized Assessment for Evaluation of Team Skills (SAFE-TeamS), which simulates complex teamwork situations with learners in their professional roles and actors depicting other healthcare roles (Wright et al., 2013). Students were assessed before and after receiving training on teamwork skills, and their scores were compared (Wright et al., 2013). Resolving conflict, being assertive, helping other team members, assessing the situation, and communicating effectively were all considered positive team-skill behaviours (Wright et al., 2013).
Open-ended Qualitative Student Reflections: Students respond to questions in their own words, expressing their thoughts and feelings (Bahreini et al., 2022). Open-ended questions help students reflect more deeply on their experiences and understand the strengths and gaps in their learning (Bahreini et al., 2022). Umland et al. (2017) used this technique to assess pharmacy students’ preparedness to work with other healthcare professionals.
Competency-based Knowledge Quizzes: IPE programs have used competency-based knowledge quizzes to test whether students understood the material taught (Aston et al., 2012).
Video Vignettes: Videos that show and describe a healthcare situation (Robertson et al., 2010). Robertson et al. (2010) had students watch video vignettes of a labour and delivery team to assess their ability to identify teamwork strengths and concerns in other healthcare professionals.
Word Clouds: Free-text responses are grouped in a visually appealing format in which more frequent words are displayed in a larger size and less frequent words in a smaller size. Peeters et al. (2017) used a word cloud to assess how students felt about other healthcare professionals, comparing responses from before and after they had started working on interprofessional teams.
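For illustration, a word cloud of this kind can be generated with a few lines of Python. This is a minimal sketch, not the actual procedure from Peeters et al. (2017): the third-party wordcloud package and the sample responses below are assumptions for demonstration.

```python
# Minimal sketch: build a word cloud from free-text survey responses.
# Assumes the third-party "wordcloud" and "matplotlib" packages are installed;
# the sample responses are hypothetical placeholders, not study data.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

responses = [
    "collaborative and respectful",
    "knowledgeable, approachable, collaborative",
    "supportive team members, clear communication",
]

# Join all responses into one text; WordCloud sizes each word by frequency.
text = " ".join(responses)
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```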
TeamSTEPPS Student Evaluation Scales
Specific scales have also been developed to evaluate student performance in TeamSTEPPS IPE programs. TeamSTEPPS itself offers several scales, including the TeamSTEPPS Teamwork Attitude Questionnaire (T-TAQ) and the Team Performance Observation Tool (TPOT). Other validated scales include the Teamwork Attitudes Instrument, the Interdisciplinary Education Perception Scale (IEPS), the Mayo High Performance Teamwork Scale (MHPTS), the Team Emergency Assessment Measure (TEAM), and the IPEC Competency Self-Assessment Tool.
TeamSTEPPS Teamwork Attitude Questionnaire (T-TAQ): Measures student attitudes toward the 4 TeamSTEPPS core competencies and team structure (Agency for Healthcare Research and Quality, 2024b). Participants rate statements on 5-point scales ranging from strongly disagree to strongly agree (Agency for Healthcare Research and Quality, 2024b).
Interdisciplinary Education Perception Scale (IEPS): Assesses students before and after an IPE course on their thoughts about working with others and on their perceived levels of competence and autonomy (Shrader & Griggs, 2014). Responses are measured using 5-point rating scales ranging from strongly disagree to strongly agree (Shrader & Griggs, 2014).
Mayo High Performance Teamwork Scale (MHPTS): Used to identify teamwork and team leadership behaviours present and absent during activities (Malec et al., 2007). Evaluators rate whether they have observed various behaviours using a 3-point scale (0 = Never or rarely; 1 = Inconsistently; 2 = Consistently; Malec et al., 2007).
Team Emergency Assessment Measure (TEAM): Measures behaviours that healthcare teams may exhibit, to gauge how well they respond in an emergency situation (Cooper et al., 2016). Healthcare professionals are rated on a 5-point scale ranging from “never/hardly ever” (0) to “always/nearly always” (4; Cooper et al., 2016).
IPEC Competency Self-Assessment Tool: Students reflect on their own abilities in each of the IPE core competencies by rating the extent to which they agree or disagree with statements on a 5-point scale (1 = strongly disagree, 5 = strongly agree; Dow et al., 2016).
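Most of the scales above aggregate 5-point Likert items into construct-level scores. The sketch below shows one way such scoring might be implemented; the item-to-construct mapping, item IDs, and responses are hypothetical, not the published structure of the T-TAQ or any other instrument named here.

```python
# Minimal sketch: aggregate 5-point Likert responses into per-construct means
# for a T-TAQ-style attitude questionnaire. The mapping and responses below
# are hypothetical placeholders, not the published instrument.
from statistics import mean

# Hypothetical mapping of item IDs to TeamSTEPPS constructs.
constructs = {
    "leadership": ["q1", "q2"],
    "situation_monitoring": ["q3", "q4"],
    "mutual_support": ["q5", "q6"],
    "communication": ["q7", "q8"],
}

# One student's responses (1 = strongly disagree ... 5 = strongly agree).
responses = {"q1": 4, "q2": 5, "q3": 3, "q4": 4, "q5": 5, "q6": 4, "q7": 2, "q8": 3}

for construct, items in constructs.items():
    score = mean(responses[i] for i in items)
    print(f"{construct}: {score:.2f}")
```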
Limitations to Implementing TeamSTEPPS
There are several limitations to implementing the TeamSTEPPS curriculum in IPE programs. One challenge is that it requires rigorous training and a significant amount of faculty time to develop activities for students (Margalit et al., 2009; Shrader & Griggs, 2014).
Because students usually come from different schools with different academic schedules, coordinating schedules across all recruited students is another challenge (Granheim et al., 2018). Furthermore, the numbers of students from the different health professions were not always proportionate, which made communicating and working together more difficult (Reed et al., 2017; Liaw et al., 2014; Brock et al., 2013).
TeamSTEPPS Diagrams:
Infographic showing the key competencies and tools within the TeamSTEPPS framework [NewTab] (Agency for Healthcare Research and Quality, 2023c)
Infographic showing the 3 components of Situation Monitoring [NewTab] (Agency for Healthcare Research and Quality, 2023h)
Infographic showing components in the I’M SAFE Checklist [NewTab] (Agency for Healthcare Research and Quality, 2023o)
The Interprofessional Collaborative Competency Attainment Survey (ICCAS) was developed to assess the core competencies of interprofessional collaboration (communication, collaboration, roles and responsibilities, working together as a team, conflict management, and having a patient- or family-centered approach), including three of the four IPEC core competencies (communication, roles and responsibilities, and working together as a team; Archibald et al., 2016; Interprofessional Education Collaborative, 2016). It is important to note that the ICCAS was built on the previous 2010 CIHC competencies, not the updated 2024 competencies (Archibald et al., 2014).
The ICCAS is a standalone tool that is usually completed after the IPE learning experience (Macdonald et al., 2010; Archibald et al., 2014). Students evaluate their pre- and post-learning experiences using a retrospective evaluation method (Archibald et al., 2014). The initial ICCAS had 20 questions using a 7-point Likert scale ranging from strongly agree (1) to strongly disagree (7) (Macdonald et al., 2010; Archibald et al., 2014).
The original ICCAS has been validated by two studies (Archibald et al., 2014; Schmitz et al., 2016). Schmitz et al. (2016) examined the suitability of the ICCAS for implementation in the United States. They found that the original 20 questions of the ICCAS were also consistent with the US IPEC Core Competencies (Schmitz et al., 2016). However, the authors made two changes to the original tool (Schmitz et al., 2016). First, they changed the rating format from a 7-point Likert scale to a 5-point scale with the options poor (1), fair (2), good (3), very good (4), and excellent (5), to decrease the time it took students to answer each question (Schmitz et al., 2016).
Next, the authors included an additional question to understand how much the respondent’s skill had changed: “Compared to the time before [Fundamentals in Interprofessional Communication and Collaboration]…, would you say your ability to collaborate interprofessionally is …….” (select one): much better now (1); somewhat better now (2); about the same (3); somewhat worse now (4); or much worse now (5)” (Schmitz et al., 2016, p. 30).
The authors found a strong association between the new question and the other 20 questions of the original ICCAS (Schmitz et al., 2016). The new question was considered a unique measure of change in self-assessed IPCP skill and had high internal consistency (Schmitz et al., 2016). A validation study confirmed that it is appropriate for use in the full ICCAS tool (Ohtake et al., 2022).
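To illustrate the retrospective pre/post design, the sketch below computes per-competency change scores from ICCAS-style responses. The competency labels and values are invented for demonstration and do not reproduce the actual survey items or any study data.

```python
# Minimal sketch: score retrospective pre/post ICCAS-style responses.
# Each student rates every item twice after the IPE experience: once for their
# ability "before" the learning activity and once for "now"
# (1 = poor ... 5 = excellent). The data below are hypothetical.
from statistics import mean

pre = {"communication": 2, "collaboration": 3, "roles": 2, "conflict": 3}
post = {"communication": 4, "collaboration": 4, "roles": 3, "conflict": 4}

# Per-item change and overall mean change in self-assessed ability.
changes = {item: post[item] - pre[item] for item in pre}
print(changes)                                   # e.g. {'communication': 2, ...}
print(f"mean change: {mean(changes.values()):.2f}")
```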
Access the Interprofessional Collaborative Competency Attainment Survey (ICCAS) here:
Interprofessional Collaborative Competencies Attainment Survey (MacDonald et al., 2009) [NewTab]
The Tool for Observing Construction of Knowledge in Interprofessional teams (TOCK-IP) is an observational tool for evaluating interactive knowledge and behaviours in a clinical context (Floren et al., 2022). It has formative and summative applications, with the goal of providing consistent feedback.
The TOCK-IP was developed following Gunawardena’s five-phase Interaction Analysis Model (p. 79), the prevailing conceptual model of knowledge construction, which describes phases of intellectual engagement. The tool is divided into two parts; the first part is split into five attitude-related modes (Floren et al., 2022):
“Mode 1 – Sharing or comparing
Mode 2 – Exploring divergence or disagreement
Mode 3 – Negotiating or Co-constructing knowledge
Mode 4 – Modifying, verifying, or evaluating/ testing
Mode 5 – Reaching agreement or application” (p. 411)
Each mode has subparts with checkboxes and a free-text comment field. Additionally, there is a rating scale for team-level knowledge, observed behaviour, and areas for improvement (Floren et al., 2022).
Psychometrics
The validity, reliability, and utility of the initial TOCK-IP were investigated by Floren et al. (2022). Evaluators (university faculty members involved in IPE) were provided with background information and instructions on how to use the TOCK-IP. They then used it to assess videos of interprofessional student teams developing a patient care plan. After using the TOCK-IP, assessors were interviewed about how useful the assessment, instructions, and background information were. Results showed “fair” inter-rater reliability, meaning raters agreed on the behavioural modes represented and the categories in which they were placed. Interviews and TOCK-IP feedback underwent thematic analysis, which found that evaluators considered the tool feasible and something they would consider using in their profession. (Floren et al., 2022)
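Inter-rater reliability of categorical judgments like these is commonly quantified with Cohen’s kappa; this is an assumption for illustration here, not necessarily the statistic Floren et al. (2022) used. The sketch below uses hypothetical ratings from two raters.

```python
# Minimal sketch: quantify agreement between two raters assigning observed
# utterances to TOCK-IP-style modes, using Cohen's kappa via scikit-learn.
# The ratings are hypothetical; by convention, "fair" agreement corresponds
# to kappa roughly in the 0.21-0.40 range.
from sklearn.metrics import cohen_kappa_score

rater_a = ["mode1", "mode1", "mode2", "mode3", "mode1", "mode2"]
rater_b = ["mode1", "mode2", "mode2", "mode3", "mode1", "mode1"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```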
Access the Tool for Observing Construction of Knowledge in Interprofessional teams (TOCK-IP) here:
Tool for Observing Construction of Knowledge in Interprofessional teams (TOCK-IP) (Floren et al., 2022, p. 413) [NewTab]
- Online supplemental materials include:
  - Background information on the TOCK-IP and examples of interprofessional team conversations (Floren et al., 2022)
  - Instructions on how to assess students using the TOCK-IP (Floren et al., 2022)
The Individual Teamwork Observation and Feedback Tool (iTOFT) was designed to facilitate formative evaluation of members of interprofessional teams (Thistlethwaite et al., 2016). Its purpose is to provide a concrete framework for evaluating students’ collaborative ability by measuring observable teamwork behaviours (Thistlethwaite et al., 2016). The researchers who validated the iTOFT noted that it functioned as a tool for providing structured feedback on observable behaviours associated with collaborative skill development and shared decision-making (Cyr et al., 2020).
This tool has two versions, Basic and Advanced (Thistlethwaite et al., 2016). The basic version was designed for learners who have limited experience with clinical teamwork (Thistlethwaite et al., 2016). It comprises eleven observable behaviors organized into two categories: “teamwork” (consisting of four items) and “shared decision making” (comprising seven items; Thistlethwaite et al., 2016). The Advanced version of the iTOFT, designed for senior students and junior health professionals, comprises four headings and ten observable behaviors: “leadership” (two items), “shared decision making” (three items), “working in a team” (three items), and “patient safety” (two items; Thistlethwaite et al., 2016). The observation scales of both versions are identical (Thistlethwaite et al., 2016).
The iTOFT has been implemented with students from a variety of professions, including medicine and pharmacy (Cyr et al., 2020; Parker et al., 2018; Nicolaides et al., 2019; Margolis et al., 2021; Crowl et al., 2019).
Psychometrics
Almoghirah et al. (2021) concluded that the iTOFT fulfilled the standard criteria of validity and reliability to a greater degree than other tools. In a satisfaction survey, pharmacy students and preceptors reported that the iTOFT was feasible to use (Margolis et al., 2021).
Strengths
The tool fostered student engagement in actively improving their academic performance through skill development and enhanced learning (Nicolaides et al., 2019). In addition, respondents indicated that it was not challenging to find opportunities to complete the iTOFT activity and that its completion did not disrupt the workflow (Margolis et al., 2021; Crowl et al., 2019). Students and preceptors also reported that the iTOFT could be finished in a reasonable amount of time (Margolis et al., 2021).
Limitations
The iTOFT has several limitations. Some studies reported that faculty members encountered difficulties in rating students’ performance using the checklist (Parker et al., 2018). Parker et al. (2018) found the iTOFT challenging to employ when evaluating entire teams, because the assessor has to evaluate multiple students at the same time. One study identified low fidelity because many participants did not complete all of the required steps (Margolis et al., 2021). Margolis et al. (2021) suggest improving the tool’s fidelity in the future by investigating why steps are being missed.
Access the Individual Teamwork Observation and Feedback Tool (iTOFT) here:
Individual Teamwork Observation and Feedback Tool (iTOFT) (Thistlethwaite, 2015) [NewTab]
Resource pack for the iTOFT (iTOFT Consortium, 2015) [NewTab]
Interprofessional Professionalism Assessment (IPA)
The Interprofessional Professionalism Assessment (IPA) evaluates the collaboration and professionalism of healthcare students by observing their behaviours while they provide person-centered care (Frost et al., 2018). The conceptual foundation of the IPA defined interprofessional professionalism around core values of professionals working together to provide patient-centered care (Hammer et al., 2012; Holtman et al., 2011; Frost et al., 2018). The IPA contains 26 behavioural elements grouped into six categories of professionalism: altruism and caring, excellence, respect, communication, ethics, and accountability (Frost et al., 2018). Each behaviour is rated on a 5-point Likert scale ranging from strongly disagree (1) to strongly agree (5) (Frost et al., 2018). Raters are also given the opportunity to provide feedback on two qualitative items: “overall strengths related to interprofessional professionalism” and “areas for improvement related to interprofessional professionalism” (Frost et al., 2018).
Validity and Reliability
The IPA’s reliability and validity have been demonstrated through psychometric outcomes across numerous health professions and practice settings (Almoghirah et al., 2023a; Almoghirah et al., 2023b; Keshmiri et al., 2022; Hosseinpour et al., 2022; Gilliam et al., 2020; Tegzes & Frost, 2021). Reliability was established by assessing internal consistency and reproducibility (Keshmiri et al., 2022). The IPA stands out from other IPE evaluation tools because it measures both professionalism and communication (Frost et al., 2018).
After its development and testing, the IPA was implemented in a variety of fields, including pharmacy, medicine, surgery, nursing, and veterinary medicine (Almoghirah et al., 2023a; Almoghirah et al., 2023b; Keshmiri et al., 2022; Hosseinpour et al., 2022; Welch, 2024; Tegzes & Frost, 2021).
Strengths
The most advantageous aspect of the IPA is its broad applicability across numerous health professions and practice contexts (Frost et al., 2018; Almoghirah et al., 2023a; Almoghirah et al., 2023b; Keshmiri et al., 2022; Hosseinpour et al., 2022; Gilliam et al., 2020; Tegzes & Frost, 2021). In domains where conventional tools struggle to assess learners, the IPA may assist clinical educators and evaluators in delivering summative and formative feedback to students (Tegzes & Frost, 2021).
Limitations
Like other behavioural and observational tools, the IPA has limitations. Different preceptors may interpret and rate the items differently, and they may vary in total evaluation time (Frost et al., 2018). IPA assessments took a long time to complete (up to 2 hours per student), which could be difficult for assessors to incorporate into their work schedules (Almoghirah et al., 2023a).
Access the Interprofessional Professionalism Assessment tool (IPA) here:
Interprofessional Professionalism Assessment tool (IPA) (Interprofessional Professionalism Collaborative, 2018) [NewTab]
The Readiness for Interprofessional Learning Scale (RIPLS) evaluates the extent to which students are prepared for interprofessional learning. It was created by Bligh and Parsell in 1999, when the importance of collaboration for healthcare professionals was being emphasized. Students rate themselves on 19 statements using 5-point scales ranging from strongly agree to strongly disagree, before and/or after IPE. The statements are grouped into 3 categories: teamwork and collaboration; negative and positive professional identity; and roles and responsibilities. Each category corresponds to a dimension of learning theory: knowledge and skills; values and beliefs; and actual behaviour, respectively. (Binienda, 2015)
Psychometrics
The RIPLS was tested using 120 students from 8 different health professions. The original version had 45 statements, but 26 were removed, which increased the Cronbach’s alpha value to 0.90. (Binienda, 2015)
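Cronbach’s alpha measures internal consistency: how strongly a scale’s items co-vary relative to the variance of the total score. As a minimal sketch, the code below computes alpha from its standard definition; the response matrix (rows = students, columns = items) is hypothetical, not RIPLS data.

```python
# Minimal sketch: Cronbach's alpha for a set of Likert items, computed from
# its standard definition. The response matrix below is hypothetical.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of students' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```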
Access the Readiness for Interprofessional Learning Scale (RIPLS) here:
Readiness for Interprofessional Learning Scale (RIPLS) (Binienda, 2015) [NewTab]
The Jefferson Team Observation Guide (JTOG) is a 17-item formative assessment designed to evaluate collaborative behaviours in teams (Collins et al., 2019; Jefferson Center for Interprofessional Practice & Education, n.d.). It has two primary goals when used in clinical practice: to show how effective interprofessional teams benefit patient care, and to bring the perspectives of patients and family members into evaluations (Philadelphia University & Thomas Jefferson University, 2018). The JTOG is described as taking a “360° approach” because it includes the perspectives of the student, assessor, patient, and their support person (Collins et al., 2019). It includes 14 Likert-scale items, 3 open-ended questions, and a debrief session, and can be used for both pre- and post-licensure students (Collins et al., 2019; Jefferson Center for Interprofessional Practice & Education, n.d.).
In addition to the original version of the JTOG, which assesses interprofessional teams, there are 3 other versions developed for different contexts: the patient JTOG (5-6 minutes), the support person JTOG (used when the patient is too sick to fill out the patient version themselves), and the individual JTOG (5-6 minutes). All versions except the individual version include the standard 17 items, with the wording modified to fit the person assessing the care provider. The patient and support person versions include additional questions about how patient-centered the care being received is. The individual version (for students assessing themselves or a peer, or an instructor assessing a single student) has 13 Likert-scale items and 2 open-ended questions. The tool was shown to be reliable through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). (Collins et al., 2019)
Rationale and Creation Process
The JTOG was designed after researchers recognized the difficulty of creating interprofessional teams in practice and of assessing them in a simple, quick, and measurable way (Smith et al., n.d.). Existing tools that measured evidence-based team behaviours (e.g., communication, leading and making decisions, and working together) had limitations: they took too long to complete, were too abstract, did not measure the most important behaviours, or measured attitudes instead of behaviours (Smith et al., n.d.; Collins et al., 2023; Lyons et al., 2016).
In 2012, the first version of the JTOG was based on an observational assessment created for a colon surgery clinical practice experience for medical, nursing, and pharmacy students (Lyons et al., 2016). Guided by the Interprofessional Education Collaborative (IPEC) core competencies and by IPE stakeholders at Thomas Jefferson University, the researchers created a draft of the scale (Lyons et al., 2016; Smith et al., n.d.). The Jefferson Centre for Interprofessional Education (JCIPE) reviewed this draft, and it then underwent 3 pilot studies (Smith et al., n.d.). In the first pilot study, healthcare students used the JTOG to review the teamwork behaviours within a case presentation of a patient in rehabilitation (Lyons et al., 2016). In the second pilot study, teams of students were recorded as they developed a patient care plan for an individual with a chronic illness (Lyons et al., 2016). The students used the form to assess themselves as they watched the video back (Lyons et al., 2016). After both pilot studies, students provided feedback on the tool, which was incorporated into a second draft of the JTOG (Lyons et al., 2016; Smith et al., n.d.). This second draft was then examined by JCIPE, and observational activities were added to it (Smith et al., n.d.). In the third pilot study, medical and nursing students reviewed the second draft of the JTOG while they observed an IPE event (Lyons et al., 2016). The JTOG then underwent validation studies to confirm its psychometrics (Smith et al., n.d.).
Psychometrics
Four studies assessed the psychometrics of the JTOG (Smith et al., n.d.; Lyons et al., 2016). The first study involved nursing and medical students who watched interprofessional teams, filled in the JTOG, and then debriefed about their experiences. Students agreed on most items in the scale, demonstrating consistency, and they were able to recognize the qualities found in effective healthcare teams. However, the generalizability of the results was limited due to the small sample size. The comments of participants who completed the reflection activity aligned with the results of the JTOG. (Smith et al., n.d.)
Two other studies had students complete the JTOG while observing a team of students meeting about the case and care plan for an individual in rehabilitation. Both found high reliability and internal consistency, with Cronbach’s alpha values of 0.97 and 0.98. (Lyons et al., 2016)
Another study verified the JTOG’s predictive and face validity by assessing and comparing the scores of an effective and an ineffective team. Results found a significant difference in JTOG scores between the teams, demonstrating validity. (Lyons et al., 2016)
JTOG Mobile App
Thomas Jefferson University developed the JTOG mobile app to make the assessment more accessible to students, faculty, and other observers (Jefferson Center for Interprofessional Practice & Education, n.d.). The JTOG app allows IPE programs to improve communication, teamwork, and training and education about the roles of healthcare professions; to develop healthcare tools and frameworks; and to increase the involvement of patients and families in their care (Philadelphia University & Thomas Jefferson University, 2018). Assessors can easily share their feedback with students through the app; such feedback has been found to be longer and more detailed than feedback on the paper version, and the data collected contribute to the development of national standards (Philadelphia University & Thomas Jefferson University, 2018; Collins et al., 2019). Compared with completing the paper assessment after observing students, the app makes it easier for assessors to evaluate students while observing them (Lyons et al., 2016). The JTOG app includes all 4 types of the JTOG: the team version, the patient version, the support person version, and the individual version (Collins et al., 2019).
Access the Jefferson Team Observation Guide (JTOG) here:
- Jefferson Interprofessional Observation Guide
  - Team version (Thomas Jefferson University, 2014) [NewTab]
    - Additional guidelines when completing the team JTOG (Lyons et al., 2016, p. 53c) [NewTab]
  - Patient version (access on the app below)
  - Support Person version (access on the app below)
  - Individual version (access on the app below)
- JTOG Mobile App
  - Google Play Store [NewTab]
  - Apple Store [NewTab]
- Activities Using JTOG (Smith et al., n.d.) [NewTab]
  - Reflection Activity (Slide 8)
  - Rehabilitation Conference Observation Activity for Occupational Therapy and Physical Therapy students (Slide 9)
  - Clinical Observation Discharge Activity for Occupational Therapy, Physical Therapy, Pharmacy and Medical students (Slide 10)
  - Ambulance Care Activity for Nursing, Couple and Family Therapy, Medical and Pharmacy students (Slide 11)
  - Colorectal Surgery Center Clinical Rounding for Medicine and Pharmacy students (Slide 12)
Developed from the McMaster-Ottawa scale, the Team Observed Structured Clinical Encounter (TOSCE) evaluates student teamwork behaviours during care delivery (National Center for Interprofessional Practice and Education, 2016). The assessment includes the same 6 themes as the McMaster-Ottawa scale: communication, collaboration, roles and responsibilities, using a patient/family-centered approach, conflict management, and team functioning. There is a Team version, which includes a space to rate the team’s overall performance, and an Individual version to rate each person on the team. Evaluators formatively rate students (or the team) on a 3-point scale: below expected, at expected, and above expected. The scale includes a description of what behaviours at each level look like for each category (see behavioural anchors below). It is important to note that this scale does not assess students’ competency in their own profession (Lie et al., 2015; National Center for Interprofessional Practice and Education, 2016).
Modifications from the McMaster-Ottawa scale
- Addition of Behavioural Anchors: The McMaster-Ottawa scale’s lack of behavioural anchors made it difficult to apply in pre- or post-professional healthcare settings (Lie et al., 2015). For each competency, Lie et al. (2015) added a list of representative behaviours, linked with improved patient outcomes, that assessors can use to rate students (e.g., “Communication: Above Expected: Expresses opinions in an objective, confident manner; speaks calmly in disagreements; shows deference; listens carefully; asks clarifying questions; responsive to non-verbal clues”; Individual Instrument, p. 3).
- Reduction from a 9-point to a 3-point Scale: The scale was reduced to 3 points (below expected, at expected, above expected) because it would be too difficult to write a behavioural anchor for every level (1-9) of each competency, especially given variation from student to student. Researchers reasoned that three levels would be easier to identify yet still sufficiently distinguishable. (Lie et al., 2015)
Validity
Lie et al. (2015) conducted a study to validate the TOSCE involving students in occupational therapy, pharmacy, and medicine. The goal was to determine whether faculty could confidently complete the TOSCE and correctly recognize levels of student behaviour after 1 hour of training with video examples. Interprofessional teams of four students participated in a 35-minute stroke simulation. Results found that raters were more lenient when marking students individually, though the majority felt they had sufficient time to complete the scale. Faculty felt more confident assessing teams than individual students, and were least successful at rating students in the below expected category. (Lie et al., 2015)
Access the Team Observed Structured Clinical Encounter (TOSCE) here:
The University of Alberta developed a TOSCE Assessment and Debriefing Guide [NewTab], a printable form of the TOSCE that leaves space for comments and written feedback for students. This guide provides a more detailed assessment by including behaviour checklists, student team roles, and an option to award partial points.
The validation study [NewTab] by Solomon et al. (2011) includes 10 example case studies that can be used for the TOSCE.
Access other IPE assessments in the Assessment and Evaluation Tool database [NewTab] by the National Center for Interprofessional Practice and Education.