Despite increased scholarly attention to classroom management, classroom management assessment, and consultation-based professional development, the availability of feasible, flexible, and defensible classroom management assessments remains limited (Reddy, Fabiano, & Jimerson, 2013). Given reported deficits in pre- and in-service professional development, in conjunction with a growing emphasis on prevention within a tiered service delivery approach, coaches, consultants, and trainers are increasingly called upon to support improvement in educators' classroom management practice. The use of performance feedback and coaching incorporating screening and formative assessment data, embedded within a collaborative consultation framework, attempts to shift professional development goals away from awareness-raising toward behavior change (Mitchell et al., 2017; Reinke et al., 2008; Simonsen et al., 2017). The DBR-CM was developed to address this limited availability of feasible classroom management assessments.
PST 1-2-3 provides explicit yet flexible guidance to educators as they engage in team-based problem-solving and data-based decision-making activities to address challenges facing students, staff, classrooms, buildings, and districts within tiered service delivery models. PST 1-2-3 draws from core problem-solving components, behavioral consultation processes, and practical, real-world experience to address frequent barriers to effective and efficient SB PST. PST 1-2-3 uses structured agendas, repetition and consistency, time limits, and deemphasis of prior training or experience to address barriers to success such as resistance, inconsistency, poor training, and lack of knowledge or skills relative to interventions, supports, data collection, or problem-solving. Since initial pilot implementation in 2008, PST 1-2-3 has been refined by incorporating additional research and theory, feedback from users, and outcomes for students. The effectiveness of PST 1-2-3 is rooted in its unique features: it requires little training for most participants, uses a consistent three-meeting cycle, and is driven by simple, structured, time-bound agendas.
School-Based Problem-Solving Teams (SB PST), in their varied forms and names, have played an important role in the progression of tiered service delivery models over time. Early forms of SB PST appeared as the first attempts to intervene in student difficulties outside of special education services (Burns & Symington, 2002). As their adoption and use have evolved, educators have relied heavily on these teams to execute components of the problem-solving process that underlies tiered service delivery models (e.g., RtI/MTSS). Despite widespread use of SB PST, the makeup, processes, and outcomes of these teams have received relatively little scholarly attention (Burns & Symington, 2002; Hargreaves & Fullan, 2012; Rosenfield, Newell, Zwolski, & Benishek, 2018). The little evidence available suggests that, in the absence of the explicit guidance and supervision of university-sanctioned research efforts, the effectiveness of SB PSTs drops dramatically (Burns, Peters, & Noell, 2008). To improve the systematic and successful implementation of SB PST, current practices must be assessed and areas of need identified. This study served as an initial step in exploring current problem-solving team practices from the perspective of key school personnel. Patterns in educator reports related to implementation, processes, makeup, and outcomes will be presented along with reported barriers to successful problem-solving team implementation. More specifically, this study investigated reports of implementation characteristics, including team makeup, member roles, and procedures, as well as targeted outcomes of SB PST across a Southeastern state.
In response to the current national and global climate, in which a renewed civil rights movement has arisen in the wake of recurring brutality against Black persons and violence against Asian Americans, those belonging to socially stigmatized identities and their allies have repeatedly called for the dismantling of oppressive systems of prejudice and discrimination that perpetuate inequality, including those within schools (Gillborn, 2005). For instance, disproportionate disciplinary practices for Black students and lower academic expectations for Black and Latinx students have been well documented (Okonofua et al., 2016; Skiba et al., 2016). Because racism is systemic, solutions must be multifaceted and system-wide. Unfortunately, even when individuals seek to promote such change, engaging in these activities may be beyond the knowledge and skills of most practitioners. In predominantly White school systems, those in positions of power often purport to lack prejudice but have in fact left their own ingrained, prejudicial belief systems unexamined and unchallenged (Dover et al., 2016). Whether unintentionally or through willful ignorance, those in positions of power frequently wield their power to maintain these discriminatory systems. It is incumbent on those around the people perpetuating oppressive systems, especially other White staff, to challenge them, promoting change in the individual and, in turn, the system. Regretfully, individual change is notoriously difficult (Reinke et al., 2011), and changing deeply ingrained racist beliefs may be even more challenging (Welton et al., 2018).
Motivational Interviewing (MI) has been used to challenge ingrained beliefs and promote attitude and behavior change in individuals (Rollnick & Miller, 1995). MI has a well-established empirical track record of effecting enduring change across a variety of outcomes (e.g., substance abuse, health and hygiene, clean water use). Given that individuals at the helm of racist institutions also hold deeply ingrained, difficult-to-change attitudes, applying MI-based strategies to challenge these attitudes appears advantageous (Legault et al., 2011). Additionally, given the challenges associated with discussing racism, the non-directive, conversation-based techniques employed within an MI approach appear well-suited to promoting change in those positioned to challenge racist behaviors. While acknowledging that interpersonal strategies and systemic policy must work in tandem to produce truly equitable systems, this paper session presents the application of MI-based techniques to create belief and behavior change in predominantly White, oppression-maintaining persons in positions of power within school settings. Examples and prevalence rates of the ways in which racism and systemic oppression manifest in schools will be presented. Next, attendees will learn the fundamental philosophies and skills of MI. At its core, and reflected in many of its principles, MI is committed to avoiding the righting reflex, or the tendency to tell individuals directly what, when, and how to change (Miller & Rollnick, 2011). Lastly, using a highly interactive format, presenters will help attendees apply MI techniques to guide difficult conversations with those in power who frequently profess egalitarian values and behaviors while rationalizing their own microaggressions or other discriminatory practices (Sawyer & Gampa, 2018). Using MI skills to avoid the righting reflex in favor of open questions, affirmations, reflections, and summaries (OARS; Miller & Rollnick, 2011) may allow change agents to circumvent defensiveness, a typical feature of Whiteness within the context of race and racism (Major et al., 2018).
The NASP Practice Model identifies consultation as a professional practice domain for problem-solving and the delivery of evidence-based interventions (NASP, 2020). School psychologists frequently engage in consultation activities, collaborating with consultees (e.g., teachers) to define a socially significant problem, analyze the problem, develop a plan (i.e., an intervention) to address the problem, and establish a monitoring system to evaluate the intervention (Kratochwill et al., 2008). Although consultation continues to function as a major mechanism for delivering interventions to children and adolescents in school settings, there remains a lack of available tools for evaluating the quality of interventions selected or generated within the consultative process. The implications of this shortcoming extend to practice and intervention research where, without knowledge of intervention quality, faulty conclusions regarding the effectiveness of consultation may result. To address this problem, the Brief Measure of Intervention Quality (BMIQ) was developed for the purpose of evaluating the quality of consultation-derived interventions.
In the last decade, teachers have consistently reported student behaviors as a top concern in their classrooms while also reporting deficits in their pre-service classroom and behavior management training (Reinke et al., 2011). A classroom environment characterized by frequent disruptions and inappropriate behaviors prevents students from learning and teachers from teaching effectively (Skiba & Rausch, 2006). Behavior problems predict low academic achievement, school dropout, and drug abuse in adolescence. Without appropriate intervention, these behaviors often escalate and can ultimately lead to placement in special education or exclusion from school (Walker et al., 2000). This is problematic because special education is linked to negative outcomes for students, and exclusionary discipline practices and special education evaluations disproportionately impact students from marginalized, underserved groups (Bradley et al., 2008; Skiba, 2002).
Many of these problems can be mitigated through a continuum of tiered interventions that target problem behaviors in the classroom and are minimally time- and resource-intensive (i.e., Tier II; Sugai, 2009). This tiered system of intervention allows for the implementation of early interventions without significant disruption to student or educator routines. One class of interventions frequently employed at Tier II is broadly labeled self-monitoring interventions.
Generally, self-monitoring interventions are characterized by explicitly teaching students self-management strategies, such as how to monitor (i.e., observe and record) their behavior and evaluate it (i.e., compare their behavior rating to an external standard), combined with reinforcement (Shapiro & Cole, 1994). Previous research has demonstrated the effectiveness of self-management interventions in increasing on-task behavior (e.g., DiGangi et al., 1991; Harris et al., 2005; Mathes & Bender, 1997) and academic productivity and accuracy (e.g., Harris et al., 2005; Shimabukuro et al., 1999), as well as decreasing disruptive behavior (e.g., Hoff & DuPaul, 1998; Koegel et al., 1992).
Unfortunately, the progress monitoring tools used with these interventions, while practical, may lack the psychometric evidence necessary to support their use in data-based decision making. In contrast, a robust evidence base supports the flexibility, usability, and defensibility of the DBR Single Item Scale (DBR-SIS), making it well-suited to serve as an intervention mechanism, especially for interventions incorporating a self-monitoring component such as the one examined in the present study (Chafouleas, 2011).
To this end, the Direct Behavior Rating-Self-Monitoring (DBR-SM) intervention was developed. DBR-SM combines self-monitoring and performance feedback intervention mechanisms using the DBR-SIS assessment methodology and formatting. The use of a structured, standardized assessment tool (e.g., operationally defined behaviors, item scoring, standardization) as the base for a self-monitoring intervention that includes a reliability check (i.e., performance feedback) appears advantageous. Continuity of assessment and intervention in a single format (i.e., form, target behaviors) across the course of the intervention process (i.e., screening, baseline, progress monitoring, maintenance) offers time and inference savings as well as improved defensibility. Furthermore, given the psychometric support for DBR-SIS, DBR-SM offers users a level of defensibility not shared by other self-monitoring interventions.
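To make the reliability-check mechanism concrete, the minimal Python sketch below models a student's self-rating being compared against the teacher's rating of the same target behavior, with agreement triggering the performance-feedback reinforcer. The 0-10 single-item scale, the 1-point agreement threshold, and all names in the code are illustrative assumptions, not specifications drawn from the DBR-SM protocol.

```python
# A minimal sketch of a DBR-SM-style reliability check (all values here are
# illustrative assumptions, not specifications from the DBR-SM protocol).

AGREEMENT_THRESHOLD = 1       # assumed: ratings within 1 point count as agreement
SCALE_MIN, SCALE_MAX = 0, 10  # assumed DBR-SIS-style 0-10 single-item scale


def check_agreement(student_rating: int, teacher_rating: int) -> bool:
    """Return True when the student's self-rating matches the teacher's
    rating within the agreement threshold."""
    for rating in (student_rating, teacher_rating):
        if not SCALE_MIN <= rating <= SCALE_MAX:
            raise ValueError(f"Rating {rating} is outside the {SCALE_MIN}-{SCALE_MAX} scale")
    return abs(student_rating - teacher_rating) <= AGREEMENT_THRESHOLD


# Example: the student rates their academic engagement an 8; the teacher rates it a 7.
if check_agreement(8, 7):
    print("Ratings agree: deliver the agreed-upon reinforcement (performance feedback).")
else:
    print("Ratings disagree: review the behavior definition together.")
```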
The development of the URP-WR is grounded in the understanding that consumers' subjective definitions of a usable resource are typically skewed toward accessibility. More formally, usability refers to “the extent to which a system, product, or service can be used by the specified users to achieve specific goals with effectiveness, efficiency, and satisfaction in a specified context of use” (International Organization for Standardization, 2018). In other words, high usability means that the school psychologist is able to easily use and apply the recommendations provided in the resource to their setting with confidence. Low usability means that the school psychologist is unable to easily use the web resource, whether due to low-quality recommendations, low-quality evidence supporting those recommendations, or the inaccessibility of the resource itself on the internet (i.e., one has to scroll through five pages of hits to find the resource).
Initial development and validation efforts included a content validation procedure followed by exploratory factor analytic and item reduction procedures using pilot study data. These analytic procedures resulted in 31 Likert-scale items (rated 0 to 6) across four domains: accessibility, appearance, plausibility (which includes items addressing the feasibility of the recommendations as well as the credibility of the research supporting them), and system support (referring to the amount of support needed from a school or other system to use the resource effectively). The URP-WR exceeds the utility of other similar measures currently available (see Lydia M. Olsen Library, 2018; Schrock, 2019), as it allows users to engage in better digital citizenship practices and to more objectively and quantifiably evaluate and compare web-based EBP/I resources. Additionally, over time, the accumulation and aggregation of consumer evaluations of web-based resources using the URP-WR would give new consumers an additional data source to use when evaluating web-based resources. In short, URP-WR scores could serve as a more objective version of a Yelp star system.
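As an illustration of how such Yelp-style pooling could work, the Python sketch below averages 0-to-6 item ratings into domain scores and aggregates those scores across respondents. The assignment of items to domains and all ratings shown are hypothetical; the published measure's exact item-to-domain mapping is not reproduced here.

```python
from statistics import mean

# Hypothetical respondent data: item ratings (0-6) grouped by URP-WR domain.
# Item counts per domain are invented for illustration only.
ratings = {
    "accessibility": [5, 6, 4, 5],
    "appearance": [4, 4, 5],
    "plausibility": [3, 4, 4, 5, 3],
    "system_support": [2, 3, 3],
}


def domain_scores(item_ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the 0-to-6 item ratings within each domain."""
    return {domain: round(mean(items), 2) for domain, items in item_ratings.items()}


def aggregate(responses: list[dict[str, list[int]]]) -> dict[str, float]:
    """Pool domain scores across respondents into a single consumer profile."""
    pooled = [domain_scores(response) for response in responses]
    return {domain: round(mean(p[domain] for p in pooled), 2) for domain in pooled[0]}


print(domain_scores(ratings))          # one respondent's domain scores
print(aggregate([ratings, ratings]))   # profile pooled across respondents
```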
Overall, the URP-WR is a promising tool in development that aims to help school psychologists and educators make well-informed decisions regarding web resource selections in schools. The URP-WR seeks to (a) lessen the burden on practitioners by providing a tool to guide web resource selection and (b) narrow the research to practice gap through promotion and selection of evidence-based resources.