6 Chapter 6: Assessment Skills

Tracy Hey; Kathryn Reeves; and Conor Barker

While current frameworks, such as Response to Intervention (RTI) and Multitiered Systems of Support (MTSS), move away from traditional individual-based assessments in favour of providing early interventions (Weiner et al., 2021), school psychologists continue to spend the majority of their time on assessment-based activities (McNamara et al., 2019; Umaña et al., 2020). Research has consistently demonstrated this pattern (e.g., Benson et al., 2019; Corkum et al., 2007; King et al., 2021), despite indications that school psychologists prefer to spend less time on individual assessments and more time providing direct services (Weiner et al., 2021). Many school psychologists view assessment skills as core to their educational and clinical practice (Corkum et al., 2007). Assessment plays a vital role in establishing cognitive or behavioural patterns that allow for a diagnosis to be made. Through the practice of assessment, school psychologists can contribute to the development and implementation of a treatment plan.

[Image 1]

Learning Objectives

  • To understand the hows and whys of assessments in school psychology.
  • To understand how the history of psychoeducational assessments can inform current assessment practices.
  • To understand the unique perspective school psychologists offer in diagnosing children.
  • To understand the utilization of different methods of assessment.
  • To critically reflect on the ethical components of assessment practices.
  • To appreciate the basic structure and delivery of assessment findings.

Psychological Assessment

What is An Assessment?

The term assessment refers to the objective evaluation of an individual obtained through the deliberate collection of data (Thompson et al., 2018). Drawing on data from various sources, psychological assessments in the school setting evaluate cognitive processes, behaviour, motivation, aptitude, and social-emotional adjustment (Canadian Psychological Association [CPA], 2007; Preston & Claypool, 2021), all of which impact a student’s ability to thrive in the classroom.

Why Conduct an Assessment?

The primary purpose of a psychological assessment is to provide a professional opinion, based upon multiple sources of information, in response to a specific referral question or to inform a practical plan of action addressing an identified area of need or risk (CPA, n.d.). School psychologists commonly assess students’ cognitive, social, academic, and behavioural functioning using observational measures, self-report data, and standardized tests. These measures are often complemented by unstructured data, such as information from caregivers and teachers. The end product of an assessment allows for a comprehensive understanding of an individual’s strengths, adaptive skills, areas of difficulty, and environmental factors that facilitate and impede classroom learning and adjustment.

Who is Involved in an Assessment?

Ideal assessment practices in school psychology stress the importance of a collaborative approach. Working with school personnel, students, custodial guardians, and other health professionals (e.g., physicians, occupational therapists, or speech-language pathologists), a school psychologist works to create healthy learning environments and to improve the functioning of all children, not only at the school level but also within family and community contexts (Jordan et al., 2009). Research by King and colleagues (2022) found that, when looking at collaborative practices, school psychologists are most effective at involving caregivers and school personnel in the assessment process and less likely to collaborate with physicians or other professionals due to reported time constraints.

How are Assessments Completed?

Completing an assessment often involves selecting multiple tests or methods (King et al., 2022) to fully address a referral question, including standardized testing, interviews, formal observations, and consultation with other health professionals (Benson et al., 2019). While standardized assessment tools can be used to address a variety of presenting problems, such as thinking and reasoning ability, memory, visual motor skills, academic performance, and executive function (Benson et al., 2019), other formal assessment practices may be used to determine behavioural or social patterns consistent with diagnostic criteria. These formal assessment measures are further complemented by observations and interviews. Finally, assessment data is informed through reviewing cumulative records, such as previous assessments, academic performance, and medical records (King et al., 2022).

Where are Assessments Conducted?

While standardized tests are completed in a secure and private location, other facets of assessment data come from more informal settings, such as the classroom, interactions with peers, and reflections provided by custodial caregivers. While assessments have traditionally relied on face-to-face interactions and observations, there has been a recent surge of interest in the efficacy of virtual assessments.

Spurred by the emergence of the novel coronavirus SARS-CoV-2, the virus responsible for COVID-19, researchers are currently investigating how assessments can be reliably used in virtual formats (Ritchie et al., 2021). This research is critical for school psychologists, who were forced to adapt when the COVID-19 pandemic necessitated work-from-home mandates (Ritchie et al., 2021). During this time, schools struggled to provide adequate services to the student population, particularly students who demonstrated the need for a higher level of care (Stifel et al., 2020). Without the development of effective and reliable virtual options for assessments, students with higher needs may continue to fall through the cracks. Furthermore, Canadian school psychologists serving rural and remote populations will specifically benefit from expanding knowledge of virtual assessment practices, as this will allow them to ensure all students have equal access to assessments regardless of geographical location (Ritchie et al., 2021).

How Assessment Information is Used

Information gathered by an assessment is vital for the classification and diagnosis of a student, both of which aid in selecting the most appropriate intervention available. Accurate assessments allow medical and mental health practitioners to effectively communicate and collaborate with the children they are serving (Lockwood et al., 2022). Evidence suggests that the earlier an assessment can take place, leading to earlier interventions, the greater the likelihood that children will have an improved developmental trajectory (Koegel et al., 2013). Psychoeducational assessments are often necessary for students to receive access to additional educational support services (Kozey & Siegel, 2008). Comprehensive assessment practices are especially vital for those who need to make important decisions about where and how to educate a student or in cases where a diagnostic label may be particularly stigmatizing (Wodrich et al., 2006).

A Historical View of Assessments

Psychologists must be aware of the history, development, and resulting cultural biases associated with the assessments they administer to ensure culturally responsive and equal service delivery (Miller et al., 2021). The historical mistreatment of those with disabilities and mental illness is well documented. Children with physical, mental, and emotional disabilities were frequently placed in institutional care and shunned by society.

Fortunately, contemporary psychological testing and assessment history was built on more humane foundations. In the early 20th century, France became one of the first countries to mandate free public education for all children (Cohen et al., 2022). French educators wanted an efficient, accurate, and fair method of deciding which classroom environment would best suit individual children for positive learning outcomes. With this task in mind, psychologist Alfred Binet and his colleague Theodore Simon were asked by the Ministry of Public Instruction to study behavioural problems in children that might impede the ability to follow regular curriculums (Farrell, 2010). Binet then developed a test that could be used to help school personnel make these placement decisions (Cohen et al., 2022). The Binet-Simon test became the first practical intelligence test.

Psychologists in other countries, such as the first educational psychologist in the United Kingdom, Cyril Burt, saw the benefits of using such tests to help solve the problem of classifying children (Farrell, 2010). However, Binet himself warned that the test was not intended to measure intelligence in totality (Michell, 2012). This warning was not heeded in other countries, where the test was seen as an opportunity to quickly assess large numbers of people. Lewis Terman revised the original Binet-Simon test in 1916 (Wilson, 2011), renaming it the Stanford-Binet; it was later normed on a large sample of adult military recruits in the United States prior to World War II (Cohen et al., 2022; Boake, 2002). While the large-scale use of cognitive assessments helped to establish the importance of psychologists, it also negatively impacted many diverse individuals. These early assessments often neglected marginalized populations and lent false claims of validity to eugenic practices. Because the inherent cultural bias in these early assessments went unconsidered, children and adults were subjected to racism and unethical treatment on the basis of quantitative assessment scores (Serpico, 2021).

[Image 2] [Image 3]

A significant development in the history of intelligence assessment was the creation of a new measurement by David Wechsler. In his early career, Wechsler enlisted with the U.S. Army, where he trained as a psychological examiner. This experience seemed to have had a lasting impact on Wechsler, who often remarked that his wartime experiences highlighted shortcomings in intelligence assessments (Boake, 2002). He noticed that normally functioning adults would often fail the intelligence tests issued by the military, which he attributed to the emphasis on formal education reflected in the language of the Stanford-Binet scale (Boake, 2002). Wechsler believed that intelligence involved many different mental abilities, describing intelligence as the global capacity of a person to act purposefully, think rationally, and deal effectively with their environment (Boake, 2002). The Wechsler-Bellevue scale was developed to address this shortcoming, incorporating both verbal and performance scales standardized on a large and diverse sample (Boake, 2002). Wechsler’s tests have since undergone many revisions, including specific test batteries for use with children (Kaufman et al., 2006).

[Image 4]

The Cattell-Horn-Carroll (CHC) theory of cognitive abilities presently guides test development and interpretation in psychology (McGill et al., 2019). The CHC theory of intelligence is influenced by the work of Raymond Cattell, John Horn, and John Carroll. Carroll’s three-stratum theory and Horn and Cattell’s Gf-Gc theory share many commonalities, and their synthesis is now referred to as the CHC theory (Caemmerer et al., 2020; Cohen et al., 2022). The CHC theory of cognitive abilities has been referred to as the most comprehensive and empirically supported psychometric theory of the structure of cognitive abilities to date (Reynolds et al., 2013). It has an impressive body of empirical support in the research literature and is the foundation for selecting, organizing, and interpreting tests of intelligence and cognitive abilities (McGill et al., 2019).

The use of CHC as an intelligence theory has faced longstanding criticism, due in part to its quick rise in popularity without much critical review. While some inherent problems were pointed out from the start, they were largely ignored (Canivez & Youngstrom, 2019). Current literature is questioning this practice. Assessments that primarily base interpretations on broad attributes are problematic, and many lack the basic psychometric properties that allow for confident interpretation (Canivez & Youngstrom, 2019). One example is the Woodcock-Johnson III (WJ-III; Woodcock et al., 2001), the first instrument constructed in alignment with CHC theory. An analysis of the WJ-III showed that the factors used did not fully align with CHC theory, measuring only a handful of the nine factors that CHC theory espouses (Dombrowski & Watkins, 2013).

The development of these and other intelligence assessments was a vital factor in establishing educational and school psychology as a profession (Farrell, 2010). Miller and colleagues (2021) have noted that the Stanford-Binet and Wechsler tests are still among the most frequently used intelligence assessments to date, with the Wechsler scales in particular being among the most widely used by school psychologists (Benson et al., 2019; Lockwood et al., 2022). Nevertheless, there are essential considerations arising from the complicated history of assessment development that modern psychologists must keep at the forefront of their assessment practice.

Intelligence-based assessments often reinforce a narrative among the general population that IQ and achievement are perfectly correlated. However, research has demonstrated that IQ accounts for only 36% of the variance in achievement (Sattler, 2001), indicating that the majority of variance in students’ academic achievement is related to factors beyond IQ. Additionally, critics of standardized intelligence assessments point to how this method fails to consider how schools or families may contribute to problem behaviours (Farrell, 2010). While the environmental impact on behaviour is well established, assessment batteries may neglect unique qualitative data that could inform findings in favour of generalizable numerical scores.
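The 36% figure follows directly from squaring the correlation coefficient. As a minimal illustration (the correlation of about .60 is an assumption inferred from the 36% figure, not stated in the source):

```python
# Illustrative sketch: the proportion of variance in achievement
# explained by IQ is the squared correlation (r squared) between them.
def variance_explained(r: float) -> float:
    """Proportion of variance shared between two measures correlating at r."""
    return r ** 2

# An IQ-achievement correlation of .60 implies 36% shared variance,
# leaving 64% of achievement variance tied to factors other than IQ.
```

This is why a seemingly strong correlation still leaves most of the variance in achievement attributable to other influences.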

Unique Perspectives of School Psychologists

School psychologists offer a unique and vital perspective on the diagnostic process in school-aged children. Under the medical model used by physicians, which relies on a working diagnosis arrived at through a summative process, behavioural and academic problems are framed as existing primarily within the child (Farrell, 2010). A medical model approach to adolescents is often criticized for ignoring unique contextual and environmental factors that influence the presentation of a child’s symptoms. Social norms that teach people to accept medical diagnoses without question further complicate this (Farrell, 2010). Medical physicians typically diagnose by identifying symptoms and choosing the most likely diagnosis.

Conversely, a school psychologist utilizes differential diagnosis methodology. This methodology involves a holistic approach of combining behavioural, cognitive, and emotional patterns to identify the most likely diagnosis. By abandoning the traditional medical model, school psychologists can practice through an inclusive lens and support school-based systems more effectively (Farrell, 2010).

The Four Pillars of Assessments

Jerome Sattler is one of the world’s foremost experts on childhood assessment (Greathouse & Shaughnessy, 2010). He created a textbook, Assessment of Children (Sattler, 2001), that is still often used in university cognitive assessment courses (Miller et al., 2021). This book is comprehensive in that it details many aspects of cognitive assessment, including ethical impasses, legal issues, history, theories of intelligence, assessment of diverse students, and report writing (Miller et al., 2021). In this work, Sattler proposed four pillars of assessment, which have been instrumental in delivering assessment services to children. These pillars are (1) norm-referenced tests, (2) observations, (3) interviews, and (4) informal measures. The four pillars of assessment complement one another, forming a firm foundation for decision-making procedures (Sattler, 2001).

[Image 5]

Norm-Referenced Tests

Raw assessment scores are rarely reported. Instead, psychologists use norm referencing (Lok et al., 2016). Also known as standardized tests, norm-referenced tests are standardized on a representative group so that each score reflects a rank within the population. Norm-referenced tests have been developed to assess academic and cognitive areas, including intelligence, arithmetic, literacy, visual motor skills, fine motor skills, and adaptive behaviours. Numerous well-established and psychometrically sound tests evaluate children’s behaviour, intelligence, and academic achievement (Sattler, 2001). Through the use of norm-referenced assessments, school psychologists can quickly identify which students are performing above or below the average range, determining whether these students meet the cut-off scores required for further interventions. It is important to note, however, that norm-referenced tests often lack ecological validity, as they reflect performance in an artificial testing environment (Ebert & Scotta, 2014). Additionally, for the results of a norm-referenced assessment to be valid, the student being assessed must match the normative sample the test was referenced on.
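As a rough sketch of how norm referencing turns a score into a rank within the population (assuming, hypothetically, the common scaling of mean 100 and standard deviation 15 used by many cognitive batteries), a standard score can be converted to a percentile rank under the normal curve:

```python
from statistics import NormalDist

def percentile_rank(standard_score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Percentage of the normative sample expected to score at or below this score."""
    return NormalDist(mu=mean, sigma=sd).cdf(standard_score) * 100

# A standard score of 100 falls at the 50th percentile;
# 115 (one SD above the mean) falls near the 84th.
```

The same mechanism underlies cut-off scores: a service criterion such as "below the 2nd percentile" corresponds to a fixed standard score only relative to the test's normative sample, which is why a mismatched norm group undermines interpretation.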

Observations

Observing children interacting with their environment provides valuable assessment information. While observational assessments take longer (McCoy, 2019), they capture interactions and behavioural patterns of children in their environments, allowing for detailed data with a high level of ecological validity (Thibodeau-Nielsen et al., 2021). To conduct observations, the observer must first define the behaviours of interest, how the behaviour will be coded, and the duration of the observational period (Thibodeau-Nielsen et al., 2021). Observational data can be gathered by observing children in various interactions, such as in the classroom, with peers, with caregivers, and during other formalized assessment tasks. While observational data tend to be a more objective measure of children’s behaviour, they are limited by how behaviour is conceptualized and may not reveal what is driving a behaviour (McCoy, 2019).
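One common way to code observations is interval recording. As a hypothetical sketch (the interval length and session data below are assumptions for illustration, not from the source), partial-interval recording marks each interval in which the target behaviour occurred at least once and reports the percentage of marked intervals:

```python
# Partial-interval recording: an interval counts if the defined target
# behaviour occurred at any point during it.
def percent_intervals(observed: list) -> float:
    """Percentage of observation intervals in which the behaviour occurred."""
    return 100 * sum(1 for interval in observed if interval) / len(observed)

# Hypothetical session: eight 30-second intervals, behaviour seen in four.
session = [True, False, True, True, False, False, True, False]
```

Comparing these percentages across settings (classroom vs. one-on-one testing, for example) is one way observational data become interpretable alongside the other pillars.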

Interviews

Interviews have a long history of use in completing robust assessments for school psychologists (Burke & Demers, 1979). Interviews allow contextually-based data provided by the child, guardians, teachers, and school personnel to be integrated into assessment findings and treatment plans. Seen as less restrictive than formal testing, interviews provide a context that allows participants to ask for clarification, elaborate on ideas, and explain perspectives in their own words (Harris & Brown, 2010). Diagnostic interviews are an integral element of research and clinical practice in establishing diagnostic accuracy (Schneider et al., 2022).

Informal Measures

Informal measures that school psychologists may review in the assessment process include previous assessments, samples of academic work, and relevant medical records. These records provide valuable insight into what was tried before, how the child responded to intervention attempts, the child’s strengths, and how the child interacts with members of their social circles. These measures allow psychologists to consider where a student is struggling and where they are thriving, allowing for a nuanced understanding of the construct under investigation.

Types of Assessments

School psychologists conduct a variety of assessments on each referred student to address the referral question(s), including interviews, observations, behavioural rating scales, and standardized and non-standardized tests (Kranzler et al., 2020).

[Image 6]

Cognitive Assessments

Administering cognitive assessments is a critical competency for school psychologists (Griffin & Christie, 2008), with intelligence tests among the profession’s most widely utilized assessments (Kranzler et al., 2020). All school psychologists are required to be trained in assessing cognitive abilities (Alfonso et al., 2000), including skills in test administration, interpretation of results, reporting findings, and assessing diverse children. The data provided through cognitive assessments is an integral part of identifying students with learning disabilities, intellectual disabilities, or giftedness (Lee et al., 2022; Machek & Nelson, 2010).

In recent years, most cognitive-based assessments have been modified to align with the Cattell-Horn-Carroll (CHC) theory (Sotelo-Dynega & Dixon, 2014). The CHC model de-emphasizes general intelligence (IQ), instead favouring the evaluation of eight broad cognitive abilities: fluid reasoning, crystallized intelligence, visual processing, processing speed, auditory processing, short-term memory, long-term retrieval, and quantitative knowledge (Fiorello et al., 2009), as well as 69 narrow abilities. Thus, the CHC theory assesses the core skills required to perform cognitive tasks. However, while school psychologists often understand the use of the CHC model, teachers or in-classroom supports may be less likely to understand the broad abilities espoused by CHC theory. Therefore, teachers need to be given opportunities for psychoeducation when debriefing the results of reports so they can better understand the interpretation and application of findings (Fiorello et al., 2009). Furthermore, while CHC theory is popular amongst school psychologists, its reliability remains in dispute, and general intelligence scores (IQ) remain the best indicator of present and future academic performance.

Cognitive assessments should be selected according to psychometric properties, reason for referral, and unique characteristics of the examinee (Sotelo-Dynega & Dixon, 2014). Examples of cognitive assessments include the Wechsler Intelligence Scale for Children – Fifth Edition (WISC-V; Wechsler, 2015), the Differential Ability Scales – Second Edition (DAS-II; Elliott, 2007), the Comprehensive Test of Nonverbal Intelligence (CTONI; Hammill et al., 1997), and the Woodcock-Johnson IV Tests of Cognitive Abilities (WJ-IV COG; Woodcock et al., 2015).

Behavioural Assessments

Students engage in behaviours for many purposes, such as gaining attention, obtaining a desired outcome, escaping unpleasant demands, and meeting sensory needs. Therefore, behavioural assessments are used to better understand a student’s problem behaviour, identify events that predict or maintain the problem behaviour, and support the development of behaviour modification plans (McIntosh & Av-Gay, 2007). Behavioural assessments are historically linked to B. F. Skinner’s work on behaviour modification, where behaviour is linked to vital information on the antecedent, behaviour, and consequence of any given action (Barnhill, 2005). Methods of behavioural assessment vary, including categories such as self-report, direct observations, data obtained from school administration, and adaptive behaviour rating scales (Chafouleas et al., 2010). Behavioural assessments can sometimes be limited due to their reliance on behaviours that have already manifested; ideally, students should be assessed before developing problematic behaviour so that early intervention can be provided. To address this limitation, a combination of methods is necessary.

Examples of behavioural assessments include the Vineland Adaptive Behavior Scales Third Edition (VABS-III; Sparrow et al., 2016), the Autism Behavior Checklist (ABC; Krug et al., 1980), Conners Parent and Teacher Rating Scales – Third Edition (Conners, 2008), and the Behavior Assessment System for Children (BASC; Merenda, 1996).

Narrative Assessments

Narrative assessments are a qualitative method used to understand the implicit meaning of personal accounts (Esquivel & Flanagan, 2007). The approach is rooted in the philosophy of hermeneutics, whose name derives from the Greek hermeneutika, meaning “message analysis” (Esquivel & Flanagan, 2007). Narrative assessments take this philosophical view to emphasize an individual’s subjective interpretation as a vital part of the ongoing process by which individuals understand past and present experiences, which in turn influences their relations to social narratives, culture, and identity.

The application of narrative psychological assessment is directly linked to the personological paradigm of assessment, commonly found in personality assessment practices. According to this perspective, personality is formed during adolescence through identity and schema development. Using narrative assessments, school psychologists gain insight into specific areas of a student’s functioning and resilience. These assessments provide a framework for future assessments or interventions that consider situational and sociocultural contexts.

The use of narrative assessments should also be done with consideration of privacy and the referral question. Due to the indirect nature of narrative assessments, students may find they reveal information they would not have purposefully told the examiner in other circumstances (Knauss, 2001). Therefore, a narrative assessment must be based on a clear rationale that links explicitly to the question being addressed in the assessment report.

Examples of narrative assessments include the Thematic Apperception Technique (TAT), Tell-Me-A-Story (TEMAS), and the Children’s Apperception Test (CAT), which provide stimuli for children to produce stories that reflect fantasy and identity through storytelling. The most comprehensive framework for narrative assessments and interpretation was provided by Teglasi (2001). These projective tests can help answer referral questions (Knauss, 2001), although they should never be used in isolation.

Strength-Based Assessments

Research demonstrates that, even during difficult circumstances, children and adolescents can recognize positive things about themselves and their environment (Bozic, 2013). Drawing on theoretical roots such as positive psychology and resiliency theory (Bozic et al., 2018), strength-based assessments allow unique resilience factors to take equal weight to struggles that may have led to the referral. Strength-based assessments typically measure emotional and behavioural skills, competencies, and characteristics that allow students to develop a sense of personal competence, allow satisfying relationships, enhance the ability to deal with stress or adversity and promote development across all domains (Rhee et al., 2001).

While traditional assessments may operate under the assumption that there are elements of a student’s functioning beyond their control, strength-based assessments allow for the integration of children’s sense of efficacy. In these assessments, a child or adolescent’s strengths are often equated with protective factors that allow children to develop healthy coping skills and social support. Strength-based assessments bring intrinsic motivation, the dynamic exchanges that arise from real-world intricacies, and a temporal arc to the forefront of the referring question (Rhee et al., 2001). They also allow an ecological perspective to be prioritized over a numerical score on an assessment. Some argue that using strength-based assessments enables clients to display higher levels of engagement and cooperation in treatment plans (Bozic, 2013), potentially due to the solution-focused techniques that are intrinsically embedded in the strength-based perspective. School psychologists recognize the importance of assessing strengths, preferring assessments that highlight students’ motivation, self-efficacy, and self-regulation (Cleary, 2009).

Several assessment instruments investigate children and adolescents’ strengths, including the Behavioral Emotional Rating Scale Second Edition (BERS-2; Buckley & Epstein, 2004), California Healthy Kids Survey – Resilience Module (CHKS; Furlong et al., 2009), the Social-Emotional-Assets and Resilience Scales (SEARS; Merrell et al., 2011), the Context of Strength Finder (CSF; Bozic et al., 2018) and the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1999).

Ethical Assessment Practices

School psychologists face several ethical issues when providing assessment services to students and teachers. These issues often pertain to difficulties that arise from school psychologists having to cater equally to the diverse needs of students, parents, teachers, and administrators (Knauss, 2001). The diverse nature of clients accessing assessment reports can create ethical dilemmas for the practicing school psychologist. While the complexity and demand for assessments delivered by school psychologists continue to grow, so does the likelihood of assessment-related ethical transgressions. While the following section will provide some insight into this topic, school psychologists must engage in protective practices to center ethical considerations. This can take many forms, such as consciously building their knowledge as new research comes out, consulting existing ethical standards, and referring to colleagues who have specialization in particular areas. Additionally, continued supervision practices offer a viable vehicle for maintaining ethical assessment competencies (Crespi & Dube, 2006).

Informed Consent

Informed consent is one of the most frequent ethical issues confronting school psychologists while conducting assessments (Knauss, 2001). Informed consent includes agreements between the psychologist and the client that outline the reasons for assessment, types of evaluations used, what will happen with the results, and who has access to the results (including teachers, administration, or other medical professionals). In Canada, all guardians are required to provide informed consent for any youth below the age of 18 prior to any assessment or procedure that is not initially agreed on when the guardian registers their child in school (Perfect & Morris, 2011) unless a court or legal mandate requests the assessment.

Informed consent encompasses many elements, including acknowledgement of why the assessment is being conducted, what the assessment will entail, confidentiality, limits to confidentiality, anticipated benefits and risks, consequences of non-action, right to withdraw or rescind consent, and information on fees (when applicable). In obtaining informed consent, providing detailed information to the student and their custodial guardian is vital. For example, it is not sufficient to give the name of a test without including an explanation of the nature and purpose of the assessment instrument.

The informed consent process establishes a relationship between the psychologist and the client, making it one of the most essential elements of the assessment process. Furthermore, Knauss (2001) noted that informed consent should be presented to the parent, guardian, or student in their native language. This is consistent with the belief that informed consent acts as a building block for psychological assessment, offering the psychologists an opportunity to build trust in both the guardian and the child.

While students under the age of majority cannot provide written consent to an assessment in their own right, they should be able to provide informed assent. Legally, assent does not have any standing. However, obtaining assent from adolescents engaged in psychological assessments facilitates a higher degree of cooperation, making the assessment results more reliable. By asking the youth to provide assent, the psychologist is also allowing them to ask questions, provide insight, and demonstrate personal autonomy (Miller et al., 2004). This information must be communicated in language the student comprehends: excessive jargon or acronyms should be avoided, and the psychologist should ensure complete understanding before proceeding.

Confidentiality

Confidentiality is seen by many as the hallmark of mental health services. However, school psychologists have an even more complicated relationship with the limits to confidentiality. Before any assessment or clinical practice can occur, new school psychologists must ensure they understand the policies set in place by the school district in which they practice. Questions to ask the administration include establishing the confidentiality of students during electronic communications, the ability to access previous school or medical records, and which individuals have access to a student’s education and medical records (Perfect & Morris, 2011). School psychologists have a duty to inform students and parents of any limits to confidentiality that may arise. For example, when conducting an assessment, it is important to inform the student that their parents will be able to view their assessment results.

Furthermore, if an assessment reveals that a child is engaging in risky behaviour, the psychologist may be required to break confidentiality when that behaviour presents a clear danger to the child or others (Rae et al., 2009). Examples include risky sexual behaviour, suicidal ideation, drug or alcohol misuse, maltreatment, or a persistent pattern of increased duration or frequency of these behaviours. School psychologists are more likely to break confidentiality when the behaviours occur within the school setting (Rae et al., 2009). Ensuring that guardians and students understand the limits to confidentiality helps to provide a foundation of trust and allows the client to engage in the assessment more confidently.

Communicating Findings

A common limitation reported by school psychologists is that parents lack the willingness to engage in the psychological services offered, making it challenging to implement efficient and effective treatment plans (Ohan et al., 2015). Research by Griffin and Christie (2008) found that assessments are often suggested by referrers (e.g., teachers and administrators) rather than by the child or custodial guardians. This may be problematic, as it suggests that some families, particularly those who have not had positive relationships with psychological supports, may be unwilling to implement suggested treatments.

Research has long pointed to the need to engage children’s guardians in the assessment process to enhance the effectiveness of interventions (Sheridan & Kratochwill, 1992). This, however, requires communication with guardians that is transparent, respectful of the unique knowledge the family provides, and evidently free of blame (Jacobs, 2012). To facilitate reflective collaboration in communicating with guardians, strategies need to address both structural barriers and relational challenges. Structural factors include elements such as conflicting schedules or difficulty with transportation.

Relational factors, in contrast, include a lack of trust between the relevant parties and histories of low administrative effort to engage parents in the educational process. Additionally, it is essential that early conversations with guardians seek to remove barriers to accessing psychological support, including acknowledging and addressing potential fears of stigma, fear of labels, lack of trust, and persistent beliefs that the school setting cannot accommodate a child’s unique treatment plan.

Cultural Competency

The importance of cultural competence is deeply embedded in ethical standards and evidence-based practices. As Canada continues to become more culturally diverse, so does the student population and those seeking psychological assessment. Historical controversy surrounds the use of assessments to diagnose intellectual or cognitive abilities in diverse groups (Sotelo-Dynega & Dixon, 2014). While modern psychologists are ethically obligated to select assessment procedures that are not racially or culturally discriminatory (Knauss, 2001), there are still several considerations when selecting culturally sensitive assessments. For example, while intelligence or cognitive assessments can provide valuable information, the validity of results may be severely hampered or compromised by cultural factors unrelated to intelligence. Language-minority students, for instance, may effectively be assessed on their language comprehension rather than their intelligence or actual deficits (Bainter & Tollefson, 2003).

Without robust consideration, cultural differences may hinder optimal performance and unduly influence recommendations. Studies consistently show that students belonging to marginalized groups are more likely to be labelled with emotional disturbance or intellectual difficulties (Sullivan & Proctor, 2016). Viewed critically, these studies demonstrate the danger of attributing emotions, learning processes, or behaviours to a child without examining them against educational disparities, cultural differences, and personal bias.

Although assessments may be valid if the student predominantly speaks English, dominance in a language does not guarantee mastery. Whenever possible, assessments should be presented in the student’s native language. It is, however, essential to consider whether the translated assessment is standardized in that language. Although research sometimes uses direct translations of assessments, this tactic is often unsatisfactory: direct translations may shift the meaning and difficulty of items, fail to consider how concepts are interpreted in the new language, or still include cultural factors that influence the comprehension of the items (Bainter & Tollefson, 2003). Although frequently used, assessment results obtained from direct translations of existing test batteries are susceptible to the ad populum fallacy, the erroneous belief that because something is widely used, it is effective and reliable (Kranzler et al., 2020). If no assessment is normed in the student’s native language, psychologists must take this into account when constructing their final report, explicitly noting the limitations of their findings.

An alternative approach for language-minority students is the use of nonverbal tests. Nonverbal assessments traditionally examine a narrow range of cognitive abilities (Bainter & Tollefson, 2003). However, they may provide a substitute method for assessing students.

Considerations for Diagnosing

The line between diagnostically significant and non-significant presentations has historically been thin. Children are especially challenging to diagnose for several reasons: they are still developing cognitively, childhood development varies widely, and children are reactive to their environments (Cartwright et al., 2017). Additionally, as with adults, healthy and unhealthy development in children is bound by cultural and social norms (Silk et al., 2000), which may differ from those of the assessing psychologist.

Although applying a diagnostic code to a child may allow them to access more significant levels of support, it may also contribute to difficulties arising from stigma or parental concern (Cartwright et al., 2017). Regardless of whether a child receives a diagnosis, there are next steps to be taken: the entire decision-making process must be documented as part of the final assessment report, and follow-up processes must occur to monitor changes in the child’s functioning (Sattler, 2001).

Despite the need for progress monitoring, school psychologists report spending 50% to 75% of their time with children in the elementary years (Jordan et al., 2009), indicating that follow-up procedures are currently limited by caseload and time constraints. As assessment reports and diagnoses determine treatment interventions, continued monitoring is the most ethical way to ensure that initial findings accurately reflect the child’s needs. Documentation increases the client’s options for effective continuity of care, facilitates communication with other professionals, makes it possible to challenge previous assessments that do not fit the child’s internal world, and demonstrates the psychologist’s compliance with ethical and legal standards.

Anatomy of an Assessment Report

The assessment report is the end product of assessments conducted by school psychologists. It integrates background information, interviews, observations, results of test batteries, informal procedures (e.g., file reviews, data from teachers), differential diagnoses based on the DSM-5 or the ICD-11, and recommendations. Translating the results of assessments into a comprehensive written report remains a significant duty for school psychologists, and the process must result in documents that are useful and understandable to the average reader (e.g., parents and teachers who may not possess specialized knowledge of psychological assessments). Information in these reports addresses areas such as the referral question, the child’s strengths and weaknesses, educational needs, and realistic recommendations for parents, administration, and classrooms. Clarity, readability, ethical and legal standards, objective and jargon-free language, and document organization are all top priorities. Research by Rahill (2018) indicates that parents and teachers prefer psychological assessment reports written with information that assists in understanding the child, and that psychological jargon or an overemphasis on test scores impedes this understanding. Psychologists need to keep two things in mind as they write their reports: first, the information provided by an assessment report can have massive impacts on clients and their families, and second, accurate assessments require the integration of data (Fletcher et al., 2015).

Readability in an assessment report can be increased through concise sentences, approachable vocabulary, active verbs, and the avoidance of acronyms (Fletcher et al., 2015). Teachers and parents prefer integrated reports that share findings through theme-based conclusions. Integrated reports can be contrasted with more traditional report styles that share findings in a test-by-test fashion, often failing to make connections between assessment tools (Fletcher et al., 2015).

Typical assessment reports contain the student’s personal history, referral questions, procedures and measures used, scores on tests administered, interpretation of findings, conclusions, diagnostic impressions, and recommendations for strategies and interventions (Fletcher et al., 2015).

Strength-Based Reports

Modern school psychologists endorse a strength-based perspective when constructing their reports (Rhee et al., 2001). Strengths are commonly incorporated into reports as part of the intervention strategy (Bozic, 2013). However, the process of identifying student strengths can begin at the first point of contact and is an essential part of the initial referral. Students are often referred for assessment on the assumption of disability or deficit (Rhee et al., 2001), yet extending assessment to include inherent strengths may make students and parents more willing to adopt a treatment plan. Strengths can therefore be viewed as protective and motivating factors to be highlighted in assessment findings. By applying a strength-based lens to assessment reports, psychologists can integrate unique information that facilitates a comprehensive intervention plan and increases the likelihood that clients will be motivated to implement the recommendations.

Debriefing

Historically, the debriefing process was minimal, as psychologists maintained that assessment results were too complex for clients to understand (Tharinger et al., 2008). Debriefing, however, allows the psychologist to communicate their findings in lay terms. School psychologists meet with the student, family, and school team whom the results of an assessment will impact. As the main reason for debriefing is to allow all parties involved to understand the picture the assessment has illuminated, findings must be shared in language the client understands. This may include the use of culturally specific language, accessible language, and the avoidance of jargon (Tharinger et al., 2008). Additionally, assessment findings must be delivered honestly (e.g., taking care not to omit bad news) and empathetically (Kamphaus & Frick, 2005).

Research supports the efficacy of debriefing using a staged approach: first, the reasons for the assessment are reviewed and general feedback is shared, paced to how well the recipient understands and is open to receiving it; next, specific findings are communicated; and finally, major findings and recommendations are summarized (Tharinger et al., 2008). The child and family can separate from the school team to process the findings before all parties are brought together to ensure an effective working relationship. Integrating individual debriefings allows families and teachers to ask sensitive questions, engage with the recommendations, and process emotional reactions without feeling that an audience is present. Effective debriefings should increase confidence in the psychologist, the school team, and the assessment process for all parties involved.


Conclusion

Assessments form a substantial portion of the workload for school psychologists, which is unlikely to change soon. Considering the life-long implications for students who receive a developmental or behavioural diagnosis, school psychologists must view assessments as a core component of their practice. As the development of new assessment tools is a constant theme in psychology, it is additionally crucial for practicing psychologists to engage in deliberate and critical analysis of the test batteries they use, examining cultural bias and the potential for unreliable results.

References

Alfonso, V. C., LaRocca, R., Oakland, T. D., & Spanakos, A. (2000). The course on individual cognitive assessment. School Psychology Review, 29(1), 52. https://doi.org/10.1080/02796015.2000.12085997

Bainter, T. R., & Tollefson, N. (2003). Intellectual assessment of language minority students: What do school psychologists believe are acceptable practices? Psychology in the Schools, 40(6), 599–603. https://doi.org/10.1002/pits.10131

Barnhill, G. P. (2005). Functional behavioral assessment in schools. Intervention in School & Clinic, 40(3), 131–143. https://doi.org/10.1177/10534512050400030101

Benson, N. F., Floyd, R. G., Kranzler, J. H., Eckert, T. L., Fefer, S. A., & Morgan, G. B. (2019). Test use and assessment practices of school psychologists in the United States: Findings from the 2017 National Survey. Journal of School Psychology, 72, 29–48. https://doi.org/10.1016/j.jsp.2018.12.004

Boake, C. (2002). From the Binet–Simon to the Wechsler–Bellevue: Tracing the history of intelligence testing. Journal of Clinical and Experimental Neuropsychology, 24(3), 383–405. https://doi.org/10.1076/jcen.24.3.383.981

Bozic, N. (2013). Developing a strength-based approach to educational psychology practice: A multiple case study. Educational & Child Psychology, 30(4), 18–29.

Bozic, N., Lawthom, R., & Murray, J. (2018). Exploring the context of strengths–A new approach to strength-based assessment. Educational Psychology in Practice, 34(1), 26–40.

Buckley, J.A. & Epstein, M.H. (2004). The Behavioral and Emotional Rating Scale-2 (BERS-2): Providing a comprehensive approach to strength-based assessment. Contemporary School Psychology, 9, 21.

Burke, J. P., & DeMers, S. T. (1979). A paradigm for evaluating assessment interviewing techniques. Psychology in the Schools, 16(1), 51–60.

Caemmerer, J. M., Keith, T. Z., & Reynolds, M. R. (2020). Beyond individual intelligence tests: Application of Cattell-Horn-Carroll theory. Intelligence, 79. https://doi.org/10.1016/j.intell.2020.101433

Canadian Psychological Association. (n.d.). Considering a career as a school psychologist in Canada? Role, training and prospects. Retrieved October 5, 2022, from https://cpa.ca/docs/File/Sections/EDsection/School%20Psychology%20in%20Canada%20-%20Roles,%20Training,%20and%20Prospects.pdf

Canadian Psychological Association. (2007). Professional practice guidelines for school psychologists in Canada. http://www.cpa.ca/cpasite/userfiles/Documents/publications/CPA%20Practice%20Guide.pdf

Canivez, G. L., & Youngstrom, E. A. (2019). Challenges to the Cattell-Horn-Carroll Theory: Empirical, clinical, and policy implications. Applied Measurement in Education, 32(3), 232–248. https://doi.org/10.1080/08957347.2019.1619562

Cartwright, J., Lasser, J., & Gottlieb, M. C. (2017). To code or not to code: Some ethical conflicts in diagnosing children. Practice Innovations, 2(4), 195–206. https://doi.org/10.1037/pri0000053

Chafouleas, S. M., Volpe, R. J., Gresham, F. M., & Cook, C. R. (2010). School-based behavioral assessment within problem-solving models: Current status and future directions. School Psychology Review, 39(3), 343–349. https://doi.org/10.1080/02796015.2010.12087756

Cleary, T. J. (2009). School-based motivation and self-regulation assessments: An examination of school psychologist beliefs and practices. Journal of Applied School Psychology, 25(1), 71–94. https://doi.org/10.1080/15377900802484190

Cohen, R. J., Tobin Renée Margaret, & Schneider, W. J. (2022). Psychological testing and assessment: An introduction to tests and measurement. McGraw Hill LLC.

Conners, C. K. (1969). A teacher rating scale for use in drug studies with children. American Journal of Psychiatry, 126(6), 884–888.

Corkum, P., French, F., & Dorey, H. (2007). School psychology in Nova Scotia. Canadian Journal of School Psychology, 22(1), 108–120. https://doi.org/10.1177/0829573507301121

Crespi, T. D., & Dube, J. M. B. (2006). Clinical supervision in school psychology. The Clinical Supervisor, 24(1-2), 115–135. https://doi.org/10.1300/J001v24n01_06

Dombrowski, S. C., & Watkins, M. W. (2013). Exploratory and higher order factor analysis of the WJ–III full test battery: A school aged analysis. Psychological Assessment, 25, 442–455. https://doi.org/10.1037/a0031335

Ebert, K. D., & Scott, C. M. (2014). Relationships between narrative language samples and norm-referenced test scores in language assessments of school-age children. Language, Speech & Hearing Services in Schools, 45(4), 337–350. https://doi.org/10.1044/2014_LSHSS-14-0034

Elliott, C. D. (2007). Differential Ability Scales (2nd ed.). San Antonio, TX: Harcourt Assessment.

Farrell, P. (2010). School Psychology: Learning Lessons from History and Moving Forward. School Psychology International, 31(6), 581–598.

Fiorello, C. A., Thurman, S. K., Zavertnik, J., Sher, R., & Coleman, S. (2009). A comparison of teachers’ and school psychologists’ perceptions of the importance of CHC abilities in the classroom. Psychology in the Schools, 46(6), 489–500. https://doi.org/10.1002/pits.20392

Fletcher, J., Hawkins, T., & Thornton, J. (2015). What makes an effective psychoeducational report? Perceptions of teachers and psychologists. Journal of Psychologists and Counsellors in Schools, 25(1), 38–54. https://doi.org/10.1017/jgc.2014.25

Furlong, M. J., Ritchey, K. M., & O’Brennan, L. M. (2009). Developing norms for the California Resilience Youth Development Module: Internal assets and school resources subscales. California School Psychologist, 14, 35–46. https://doi.org/10.1007/BF03340949

Garbacz, A., Godfrey, E., Rowe, D. A., & Kittelman, A. (2022). Increasing parent collaboration in the implementation of effective practices. Teaching Exceptional Children, 54(5), 324–327. https://doi.org/10.1177/00400599221096974

Goodman, R. (1999). The extended version of the Strengths and Difficulties Questionnaire as a guide to child psychiatric caseness and consequent burden. Journal of Child Psychology & Psychiatry & Allied Disciplines, 40(5), 791. https://doi.org/10.1111/1469-7610.00494

Greathouse, D., & Shaughnessy, M. F. (2010, June). An interview with Jerome Sattler. Retrieved October 18, 2022, from https://www.researchgate.net/publication/233756610_An_Interview_with_Jerome_Sattler

Griffin, A., & Christie, D. (2008). Taking a systemic perspective on cognitive assessments and reports: Reflections of a paediatric and adolescent psychology service. Clinical Child Psychology and Psychiatry, 13(2), 209–219. https://doi.org/10.1177/1359104507088343

Hammill, D.D., Pearson, N.A., & Wiederholt, J.L. (1997). Examiners manual: Comprehensive Test of Nonverbal Intelligence. Austin, TX: Pro-Ed.

Harris, L. R. & Brown, G. T. L. (2010). Mixing interview and questionnaire methods: Practical problems in aligning data. Practical Assessment, Research, and Evaluation, 15(1). https://doi.org/10.7275/959j-ky83

Jacobs, L. (2012). Assessment as consultation: Working with parents and teachers. Journal of Infant, Child & Adolescent Psychotherapy, 11(3), 257–271. https://doi.org/10.1080/15289168.2012.701134

Jordan, J. J., Hindes, Y. L., & Saklofske, D. H. (2009). School psychology in Canada. Canadian Journal of School Psychology, 24(3), 245–264. https://doi.org/10.1177/0829573509338614

Kamphaus, R. W., & Frick, P. J. (2005). Clinical assessment of child and adolescent personality and behavior (2nd ed.). Boston: Allyn & Bacon.

Kaufman, A. S., Flanagan, D. P., Alfonso, V. C., & Mascolo, J. T. (2006). Test review: Wechsler, D. (2003). “Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV).” San Antonio, TX: The psychological corporation. Journal of Psychoeducational Assessment, 24(3), 278–295.

Knauss, L. K. (2001). Ethical issues in psychological assessment in school settings. Journal of Personality Assessment, 77(2), 231–241. https://doi.org/10.1207/S15327752JPA7702_06

Koegel, L. K., Koegel, R. L., Ashbaugh, K., & Bradshaw, J. (2014). The importance of early identification and intervention for children with or at risk for autism spectrum disorders. International Journal of Speech-Language Pathology, 16(1), 50–56. https://doi.org/10.3109/17549507.2013.861511

Kozey, M., & Siegel, L. S. (2008). Definitions of learning disabilities in Canadian provinces and territories. Canadian Psychology / Psychologie Canadienne, 49(2), 162–171. https://doi.org/10.1037/0708-5591.49.2.162

Kranzler, J. H., Maki, K. E., Benson, N. F., Eckert, T. L., Floyd, R. G., & Fefer, S. A. (2020). How do school psychologists interpret intelligence tests for the identification of specific learning disabilities? Contemporary School Psychology, 24(4), 445–456.

Krug, D. A., Arick, J., & Almond, P. (1980). Behavior checklist for identifying severely handicapped individuals with high levels of autistic behaviours. Journal of Child Psychology and Psychiatry, 21, 221–229.

Lee, K., Graves, S. L., Jr., & Bumpus, E. (2022). Competency standards in cognitive assessment course assignments: A national analysis of school psychology syllabi. Training and Education in Professional Psychology. https://doi.org/10.1037/tep0000409

Lockwood, A. B., Benson, N., Farmer, R. L., & Klatka, K. (2022). Test use and assessment practices of school psychology training programs: Findings from a 2020 survey of US Faculty. Psychology in the Schools, 59(4), 698–725. https://doi.org/10.1002/pits.22639

Lok, B., McNaught, C., & Young, K. (2016). Criterion-referenced and norm-referenced assessments: compatibility and complementarity. Assessment & Evaluation in Higher Education, 41(3), 450–465. https://doi.org/10.1080/02602938.2015.1022136

Machek, G. R., & Nelson, J. M. (2010). School psychologists’ perceptions regarding the practice of identifying reading disabilities: Cognitive assessment and response to intervention considerations. Psychology in the Schools, 47(3), 230–245. https://doi.org/10.1002/pits.20467

McCoy, D. C. (2019). Measuring Young Children’s Executive Function and Self-Regulation in Classrooms and Other Real-World Settings. Clinical Child & Family Psychology Review, 22(1), 63–74. https://doi.org/10.1007/s10567-019-00285-1

McGill, R. J., & Dombrowski, S. C. (2019). Critically reflecting on the origins, evolution, and impact of the Cattell-Horn-Carroll (CHC) Model. Applied Measurement in Education, 32, 216- 231. doi: 10.1080/08957347.2019.1619561

McNamara, K. M., Walcott, C. M., & Hyson, D. (2019). Results from the NASP 2015 membership survey, part two: Professional practices in school psychology. National Association of School Psychologists. https://www.nasponline.org/Documents/Research%20and%20Policy/Research%20Center/NRR_Mem_Survey_2015_McNamara_Walcott_Hyson_2019.pdf

Merenda, P. F. (1996). BASC: Behavior Assessment System for Children. Measurement & Evaluation in Counseling & Development, 28(4), 229.

Merrell, K. W., Cohn, B. P., & Tom, K. M. (2011). Development and validation of a teacher report measure for assessing social-emotional strengths of children and adolescents. School Psychology Review, 40(2), 226–241. https://doi.org/10.1080/02796015.2011.12087714

Michell J. (2012). Alfred Binet and the concept of heterogeneous orders. Frontiers in psychology, 3, 261. https://doi.org/10.3389/fpsyg.2012.00261

Miller, L. T., Bumpus, E. C., & Graves, S. L. (2021). The state of cognitive assessment training in school psychology: An analysis of syllabi. Contemporary School Psychology, 25(2), 149-156. https://link.springer.com/article/10.1007/s40688-020-00305-w

Miller, V. A., Drotar, D., & Kodish, E. (2004). Children’s competence for assent and consent: A review of empirical findings. Ethics & Behavior, 14(3), 255–295. https://doi.org/10.1207/s15327019eb1403_3

Ohan, J. L., Seward, R. J., Stallman, H. M., Bayliss, D. M., & Sanders, M. R. (2015). Parents’ barriers to using school psychology services for their child’s mental health problems. School Mental Health: A Multidisciplinary Research and Practice Journal, 7(4), 287–297. https://doi.org/10.1007/s12310-015-9152-1

Perfect, M. M., & Morris, R. J. (2011). Delivering School-Based Mental Health Services by School Psychologists: Education, Training, and Ethical Issues. Psychology in the Schools, 48(10), 1049–1063.

Preston, J. P., & Claypool, T. R. (2021). Analyzing assessment practices for Indigenous students. Frontiers in Education, 6. https://www.frontiersin.org/articles/10.3389/feduc.2021.679972/full

Rahill, S. A. (2018). Parent and teacher satisfaction with school‐based psychological reports. Psychology in the Schools, 55(6), 693–706. https://doi.org/10.1002/pits.22126

Reynolds, C. R., Vannest, K. J., & Fletcher-Janzen, E. (Eds.). (2013). Encyclopedia of special education. Retrieved October 19, 2022, from https://onlinelibrary.wiley.com/doi/book/10.1002/9781118660584

Rhee, S., Furlong, M. J., Turner, J. A., & Harari, I. (2001). Integrating strength-based perspectives in psychoeducational evaluations. California School Psychologist, 6, 5–17. https://doi.org/10.1007/BF03340879

Ritchie, T., Rogers, M., & Ford, L. (2021). Impact of covid-19 on school psychology practices in Canada. Canadian Journal of School Psychology, 36(4), 358–375. https://doi.org/10.1177/08295735211039738

Sattler, J. M. (2001). Assessment of Children: Cognitive Applications. San Diego, CA: Jerome M. Sattler Publisher.

Schneider, L. H., Pawluk, E. J., Milosevic, I., Shnaider, P., Rowa, K., Antony, M. M., Musielak, N., & McCabe, R. E. (2022). The Diagnostic Assessment Research Tool in action: A preliminary evaluation of a semistructured diagnostic interview for DSM-5 disorders. Psychological Assessment, 34(1), 21–29. https://doi.org/10.1037/pas0001059

Serpico D. (2021). The cyclical return of the IQ controversy: Revisiting the lessons of the resolution on genetics, race and intelligence. Journal of the history of biology, 54(2), 199–228. https://doi.org/10.1007/s10739-021-09637-6

Sheridan, S. M., & Kratochwill, T. R. (1992). Behavioral parent-teacher consultation: Conceptual and research considerations. Journal of School Psychology, 30(2), 117–139. https://doi.org/10.1016/0022-4405(92)90025-Z

Silk, J. S., Nath, S. R., Siegel, L. R., & Kendall, P. C. (2000). Conceptualizing mental disorders in children: Where have we been and where are we going? Development and Psychopathology, 12(4), 713–735. https://doi.org/10.1017/S0954579400004090

Sotelo-Dynega, M., & Dixon, S. G. (2014). Cognitive assessment practices: A survey of school psychologists. Psychology in the Schools, 51(10), 1031–1045.

Stifel, S. W., Feinberg, D. K., Zhang, Y., Chan, M.-K., & Wagle, R. (2020). Assessment during the COVID-19 pandemic: Ethical, legal, and safety considerations moving forward. School Psychology Review, 49(4), 438–452. https://doi.org/10.1080/2372966x.2020.1844549

Sparrow, S. S., Cicchetti, D. V., & Saulnier, C. A. (2016). Vineland Adaptive Behaviour Scales. Minneapolis, MN: American Guidance Service.

Sullivan, A. L., & Proctor, S. L. (2016). The shield or the sword? Revisiting the debate on racial disproportionality in special education and implications for school psychologists. School Psychology Forum, 10(3), 278–288.

Teglasi, H. (2001). Essentials of TAT and other story telling assessment measures. New York: Wiley.

Tharinger, D. J., Finn, S. E., Hersh, B., Wilkinson, A., Christopher, G. B., & Tran, A. (2008). Assessment Feedback With Parents and Preadolescent Children: A Collaborative Approach. Professional Psychology: Research & Practice, 39(6), 600–609. https://doi.org/10.1037/0735-7028.39.6.600

Thibodeau, N. R. B., White, R. E., & Palermo, F. (2021). Advancements in observing behavior in preschool classrooms: A new audiovisual approach. Social Development, 30(4), 899–909. https://doi.org/10.1111/sode.12543

Thompson, T., Coleman, J. M., Riley, K., Snider, L. A., Howard, L. J., Sansone, S. M., & Hessle, D. (2018). Standardized assessment accommodations for individuals with intellectual disability. Contemporary School Psychology, 22(4), 443-457. https://link.springer.com/article/10.1007/s40688-018-0171-4

Wechsler, D. (2015). Wechsler intelligence scale for children-fifth edition. San Antonio, TX: Psychological Corporation.

Weiner, Y., Shernoff, E. S., & Kettler, R. J. (2021). A survey of newly enrolled school psychology trainees: Estimates of key role and function. Psychology in the Schools, 58(7), 1209–1224. https://doi.org/10.1002/pits.22499

Wilson, S.M. (2011). Stanford-Binet Intelligence Scales. In: Goldstein, S., Naglieri, J.A. (eds) Encyclopedia of Child Behavior and Development. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-79061-9_2783

Wodrich, D. L., Spencer, M. L. S., & Daley, K. B. (2006). Combining RTI and psychoeducational assessment: What we must assume to do otherwise. Psychology in the Schools, 43(7), 797–806. https://doi.org/10.1002/pits.20189

Woodcock, R.W., McGrew, K.S., & Mather, N. (2015). Woodcock-Johnson IV Tests of Cognitive Abilities. Itasca, IL: Riverside Publishing.

License

Foundations in School Psychology Copyright © 2023 by Conor Barker is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License, except where otherwise noted.