
Ongoing Projects & Studies

Ongoing Validation of the Direct Behavior Rating-Classroom Management (DBR-CM)

Despite increased scholarly attention to classroom management, classroom management assessment, and consultation-based professional development, the availability of feasible, flexible, and defensible classroom management assessments remains limited (Reddy, Fabiano, & Jimerson, 2013). Given reported deficits in pre-service and in-service professional development, together with a growing emphasis on prevention within tiered service delivery, coaches, consultants, and trainers are increasingly called upon to support improvement in educators' classroom management practice. Performance feedback and coaching that incorporate screening and formative assessment data, embedded within a collaborative consultation framework, attempt to shift professional development goals away from awareness-raising and toward behavior change (Mitchell et al., 2017; Reinke et al., 2008; Simonsen et al., 2017). The DBR-CM was developed to address the limited availability of feasible classroom management assessments.



PST 1-2-3: Efficient, Effective School-Based Problem-Solving Teams


PST 1-2-3 provides explicit yet flexible guidance to educators as they engage in team-based problem-solving and data-based decision-making to address challenges facing students, staff, classrooms, buildings, and districts within tiered service delivery models. PST 1-2-3 draws on core problem-solving components, behavioral consultation processes, and practical, real-world experience to address frequent barriers to effective and efficient SB PST. It uses structured agendas, repetition and consistency, time limits, and a deemphasis on prior training or experience to address barriers to success such as resistance, inconsistency, poor training, and limited knowledge or skill related to intervention, supports, data collection, or problem-solving. Since its initial pilot implementation in 2008, PST 1-2-3 has been refined by incorporating additional research and theory, feedback from users, and student outcomes. The effectiveness of PST 1-2-3 is rooted in its distinctive features: it requires little training for most participants, follows a consistent three-meeting cycle, and is driven by simple, structured, time-bound agendas.
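
For readers who think in code, the sketch below models the three-meeting cycle as a set of time-bound agendas. The meeting names, agenda items, and time limits here are illustrative assumptions for this sketch, not the published PST 1-2-3 agendas.

    from dataclasses import dataclass, field

    @dataclass
    class AgendaItem:
        """One time-bound item on a PST 1-2-3-style meeting agenda."""
        description: str
        minutes: int  # a hard time limit keeps the meeting on schedule

    @dataclass
    class Meeting:
        """One meeting in the consistent three-meeting cycle."""
        name: str
        items: list[AgendaItem] = field(default_factory=list)

        def total_minutes(self) -> int:
            return sum(item.minutes for item in self.items)

    # Illustrative cycle; names, items, and limits are assumptions,
    # not the published PST 1-2-3 agendas.
    cycle = [
        Meeting("Meeting 1: Problem identification", [
            AgendaItem("Define the problem in observable terms", 10),
            AgendaItem("Agree on a data-collection plan", 10),
        ]),
        Meeting("Meeting 2: Plan development", [
            AgendaItem("Review baseline data", 10),
            AgendaItem("Select intervention and supports", 15),
        ]),
        Meeting("Meeting 3: Plan evaluation", [
            AgendaItem("Review progress-monitoring data", 10),
            AgendaItem("Decide: continue, modify, or fade the plan", 10),
        ]),
    ]

    for meeting in cycle:
        print(f"{meeting.name}: {meeting.total_minutes()} minutes")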



Evaluating School-Based Problem-Solving Teams


School-Based Problem-Solving Teams (SB PST), in their varied forms and under varied names, have played an important role in the progression of tiered service delivery models. Early forms of SB PST represented the first attempts to intervene in student difficulties outside of special education services (Burns & Symington, 2002). As their adoption and use have evolved, educators have relied heavily on these teams to execute the problem-solving components that underlie tiered service delivery models (e.g., RtI/MTSS). Despite the widespread use of SB PST, the makeup, processes, and outcomes of these teams have received relatively little scholarly attention (Burns & Symington, 2002; Hargreaves & Fullan, 2012; Rosenfield, Newell, Zwolski, & Benishek, 2018). The limited evidence available suggests that, in the absence of the explicit guidance and supervision provided by university-sanctioned research efforts, the effectiveness of SB PSTs drops dramatically (Burns, Peters, & Noell, 2008). To improve the systematic and successful implementation of SB PST, current practices must be assessed and areas of need identified. This study served as an initial step in exploring current problem-solving team practices from the perspective of key school personnel. Patterns in educator reports related to implementation, processes, makeup, and outcomes will be presented along with reported barriers to successful problem-solving team implementation. More specifically, this study investigated reports of implementation characteristics, including team makeup, member roles, and procedures, as well as targeted outcomes of SB PST across a Southeastern state.

Evaluating Consultation-Derived Intervention Quality: The Brief Measure of Intervention Quality (BMIQ)

The NASP Practice Model identifies consultation as a professional practice domain for problem-solving and the delivery of evidence-based interventions (NASP, 2020). School psychologists frequently engage in consultation activities, collaborating with consultees (e.g., teachers) to define a socially significant problem, analyze the problem, develop a plan (i.e., an intervention) to address the problem, and establish a monitoring system to evaluate the intervention (Kratochwill et al., 2008). Although consultation continues to function as a major mechanism for delivering interventions to children and adolescents in school settings, there remains a lack of available tools for evaluating the quality of interventions selected or generated within the consultative process. The implications of this shortcoming extend to practice and intervention research, where, without knowledge of intervention quality, faulty conclusions regarding the effectiveness of consultation may result. To address this problem, the Brief Measure of Intervention Quality (BMIQ) was developed to evaluate the quality of consultation-derived interventions.


Beyond Assessment: Direct Behavior Rating-Self-Monitoring (DBR-SM)


In the last decade, teachers have consistently reported student behavior as a top concern in their classrooms while also reporting deficits in their pre-service classroom and behavior management training (Reinke et al., 2011). A classroom environment characterized by frequent disruptions and inappropriate behavior prevents students from learning and teachers from teaching effectively (Skiba & Rausch, 2006). Behavior problems predict low academic achievement, school dropout, and drug abuse in adolescence. Without appropriate intervention, these behaviors often escalate and can ultimately lead to placement in special education or exclusion from school (Walker et al., 2000). This is problematic because special education placement is linked to negative outcomes for students, and exclusionary discipline practices and special education evaluations disproportionately impact students from marginalized and underserved groups (Bradley et al., 2008; Skiba, 2002).

Many of these problems can be mitigated through a continuum of tiered interventions that target problem behaviors in the classroom and are minimally time- and resource-intensive (i.e., Tier II; Sugai, 2009). A tiered system of intervention allows early interventions to be implemented without significantly disrupting student or educator routines. One class of interventions frequently employed at Tier II is broadly labeled self-monitoring interventions.

Generally, self-monitoring interventions are characterized by explicitly teaching students self-management strategies, such as how to monitor (i.e., observe and record) their behavior and evaluate it (i.e., compare their behavior rating to an external standard), combined with reinforcement (Shapiro & Cole, 1994). Previous research has demonstrated the effectiveness of self-management interventions in increasing on-task behavior (e.g., DiGangi et al., 1991; Harris et al., 2005; Mathes & Bender, 1997) and academic productivity and accuracy (e.g., Harris et al., 2005; Shimabukuro et al., 1999), as well as in decreasing disruptive behavior (e.g., Hoff & DuPaul, 1998; Koegel et al., 1992).

Unfortunately, the progress monitoring tools embedded in these interventions, while practical, may lack the psychometric evidence necessary to support their use in data-based decision making. In contrast, a robust evidence base supports the flexibility, usability, and defensibility of the DBR Single Item Scale (DBR-SIS), making it well suited to serve as an intervention mechanism, especially in interventions that incorporate a self-monitoring component (Chafouleas, 2011).

To this end, the Direct Behavior Rating-Self-Monitoring (DBR-SM) was developed as a self-monitoring intervention that combines self-monitoring and performance feedback mechanisms using the DBR-SIS assessment methodology and formatting. Using a structured, standardized assessment tool (e.g., operationally defined behaviors, item scoring, standardization) as the base for a self-monitoring intervention that includes a reliability check (i.e., performance feedback) appears advantageous. Continuity of assessment and intervention in a single format (i.e., form, target behaviors) across the course of the intervention process (i.e., screening, baseline, progress monitoring, maintenance) offers savings in time and inference as well as improved defensibility. Furthermore, given the psychometric support for DBR-SIS, the DBR-SM offers users a level of defensibility not shared by other self-monitoring interventions.
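
As a hypothetical illustration of the reliability check described above, the sketch below compares a student's self-ratings against the teacher's ratings and counts matching intervals. The 0-10 rating gradient, the one-point tolerance, and the interval structure are assumptions of this sketch, not specifications of the DBR-SM protocol.

    def dbr_sm_feedback(student: float, teacher: float,
                        tolerance: float = 1.0) -> bool:
        """Reliability check: does the student's self-rating match the
        teacher's rating within the agreed tolerance?

        Ratings are assumed to fall on a 0-10 DBR-SIS-style gradient;
        the one-point tolerance is an assumption of this sketch.
        """
        return abs(student - teacher) <= tolerance

    # Illustrative (student self-rating, teacher rating) pairs for four
    # rating intervals in one class period.
    ratings = [(8, 7), (6, 9), (7, 7), (5, 5)]
    matches = sum(dbr_sm_feedback(s, t) for s, t in ratings)
    print(f"Agreement on {matches} of {len(ratings)} intervals")
    # Performance feedback and reinforcement would then hinge on this count.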


Continued Validation and Refinement of the User Rating Profile - Web Resource (URP-WR)


Development of the URP-WR is grounded in the understanding that consumers' subjective definition of a usable resource is typically skewed toward accessibility. More formally, usability refers to “the extent to which a system, product, or service can be used by the specified users to achieve specific goals with effectiveness, efficiency, and satisfaction in a specified context of use” (International Organization for Standardization, 2018). In other words, high usability means that a school psychologist can easily apply the recommendations provided in a resource to their setting with confidence. Low usability means that the school psychologist cannot easily use the web resource, whether because of low-quality recommendations, low-quality evidence supporting those recommendations, or the inaccessibility of the resource itself on the internet (e.g., one has to scroll through five pages of hits to find the resource).

Initial development and validation efforts included a content validation procedure followed by exploratory factor analytic and item reduction procedures using pilot study data. These analyses resulted in 31 Likert-scale items (rated 0 to 6) across four domains: accessibility, appearance, plausibility (which includes items on the feasibility of the recommendations as well as the credibility of the research supporting them), and system support (the amount of support needed from a school or other system to use the resource effectively). The URP-WR exceeds the utility of other similar measures currently available (see Lydia M. Olsen Library, 2018; Schrock, 2019) because it allows users to engage in better digital citizenship practices and to evaluate and compare web-based EBP/I resources more objectively and quantifiably. Additionally, over time, the accumulation and aggregation of consumer evaluations of web-based resources using the URP-WR would give new consumers an additional data source when evaluating web-based resources. In short, URP-WR scores could serve as a more objective version of a Yelp star system.
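
To make that scoring concrete, the sketch below averages one rater's 31 item responses into the four domain scores and then pools domain scores across raters in the Yelp-style fashion described above. The particular assignment of item indices to domains is a hypothetical assumption of this sketch, not the actual URP-WR scoring key.

    from statistics import mean

    # Hypothetical item-to-domain key: the URP-WR has 31 items rated
    # 0-6 across four domains, but this particular assignment of item
    # indices to domains is an assumption of the sketch.
    DOMAINS = {
        "accessibility": range(0, 8),
        "appearance": range(8, 15),
        "plausibility": range(15, 24),
        "system_support": range(24, 31),
    }

    def domain_scores(responses: list[int]) -> dict[str, float]:
        """Average one rater's 31 item responses (0-6) by domain."""
        assert len(responses) == 31
        return {name: mean(responses[i] for i in idx)
                for name, idx in DOMAINS.items()}

    def aggregate(raters: list[list[int]]) -> dict[str, float]:
        """Pool domain scores across raters (the 'Yelp star' style
        aggregation described above)."""
        per_rater = [domain_scores(r) for r in raters]
        return {name: mean(s[name] for s in per_rater) for name in DOMAINS}

    # Two illustrative raters evaluating the same web resource.
    print(aggregate([[4] * 31, [5] * 31]))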

Overall, the URP-WR is a promising tool in development that aims to help school psychologists and educators make well-informed decisions regarding web resource selection in schools. The URP-WR seeks to (a) lessen the burden on practitioners by providing a tool to guide web resource selection and (b) narrow the research-to-practice gap through the promotion and selection of evidence-based resources.


Supporting Students with ADHD in Schools

Diagnosis and Symptoms

Initial training content focuses on raising awareness of ADHD symptomology, with a particular focus on the ways inattentive, hyperactive, and impulsive symptoms manifest in school. Specific deficits related to ADHD, including those in executive functioning, social functioning, self-perception (often distorted), internalizing and externalizing behavior, and comorbid conditions, are discussed. Additionally, this section of the training presents knowledge about ADHD as it relates to gender, culture, socioeconomic status, and genetic and environmental risk factors. Finally, this section addresses common misconceptions about the causes of ADHD (e.g., diet, lead, vaccines).

Assessment Tools

In the next portion of the training, the utility of selected rating scales, computerized cognitive assessments, and semi-structured interviews for assessing ADHD is reviewed. Trainees experience a simulation of a computerized cognitive assessment and consider specific interview questions.

Treatment Options

  

The training concludes by presenting a variety of evidence-based treatment options recommended to support students with ADHD, along with closely related considerations. After identifying current issues with treatment (e.g., retention), the discussion tackles the controversial issue of stimulant medication and introduces parent management training and other community-based interventions for educators to be aware of. Lastly, trainees are provided with evidence-based practices for classroom use and an outline of school-based interventions (e.g., contingency management, peer mentoring programs).

To access the complete webinar, email emont062@ucr.edu.

School Service Provision Research Collaborative

c/o Dr. Wesley Sims / 900 University Ave. / 1207 Sproul Hall / Riverside, CA 92521

951-827-5582

Copyright © 2022 ssprc.org - All Rights Reserved.