Precision teaching (PT), an extension of Skinner’s work, brought standardization into data analysis (Lindsley, 1991). The standard celeration chart (SCC) allowed for this standardization given its fixed axes, which, unlike those of its counterparts, cannot be stretched to fill the available space. One could monitor global behavioral influences, such as the coronavirus (COVID-19), using the SCC (Harder, 2019). Further, one could also monitor increases in learner performance produced by interventions such as frequency building. Frequency building is a particular intervention involving repeated, timed, often sprinted practice with immediate feedback regarding performance (i.e., a timing; Dembek & Kubina, 2018). As such, precision teachers, when using frequency building, would shape successive increases in frequency (i.e., rate) to optimum levels of responding (i.e., frequency aims) in order to establish fluent performance. Once performance met fluent levels, it could be maintained over a period without practice, it endured for periods longer than those used in training, it remained stable in the face of distraction, it could be applied to compound skills, and it could generalize to novel contexts (Johnson & Street, 2014). Fluency is PT’s mastery criterion. As such, the priorities of precision teachers consist of standardization and shaping.

Shaping historically involves artistic elements that lack standardization, which can function as a barrier to replication (Galbicka, 1994). Peterson depicts this art form beautifully in a story of Skinner hand shaping a pigeon to “bowl” in his lab at the top floor of a flour mill in 1943 (Peterson, 2004, p. 317). Skinner and his colleagues reinforced each successive approximation by hand. Shaping began with a simple head movement in the direction of the ball and progressed to a swipe of the head at the ball. This artistic element has lingered in shaping research ever since. Authors leave the criteria for withholding or delivering reinforcers and punishers throughout the shaping cycle unspecified (e.g., Ferguson & Rosales-Ruiz, 2001; Hoffman & Fleshler, 1959; Schaefer, 1970), possibly due to this influence.

Standardization of shaping entered the applied scene when Galbicka (1994) proposed a method to eliminate subjectivity and potentially reduce the need for an extensively experienced shaper, whether one is shaping topography (e.g., a horse moving to a particular location; Ferguson & Rosales-Ruiz, 2001) or a dimension of that behavior (e.g., duration, frequency, latency). This systematic process was a new reinforcement schedule called the percentile schedule of reinforcement. It used the formula k = (m + 1)(1 − w), where m equals the number of past observations of the targeted behavior, w equals the density of reinforcement selected by the precision teacher, and k equals the rank, among the m observations, that must be surpassed to qualify for reinforcement.

The percentile schedule of reinforcement met the four requirements for shaping: (a) criteria are based on the current behavior of the learner, (b) the criterion results in a proportional amount of behavior that accesses reinforcement, (c) reinforcement is consistent and intermittent, and (d) there is a discrete definition of a terminal response (Galbicka, 1994). It met these requirements via the variables of the formula: m accounts for the current behavior, w accounts for the proportion of behavior accessing reinforcement, k serves as the terminal response, and when these are used in combination, the formula produces consistent, intermittent reinforcement.

This approach specifically brought clarity to the last requirement: a terminal response. Often, researchers and clinicians use accuracy or topography to define the terminal response (e.g., Conyers et al., 2004), which often leads to subjectivity. However, the k in the formula gave an objective and clear indication as to the terminal response without a subjective description of just “better.”

The discrete definition of a terminal response also uniquely aligns well with PT. As previously mentioned, frequency aims often guide clinicians toward optimal levels of responding that have previously been associated with fluency (Binder, 1996). The use of frequency ranges to specify the aim (e.g., 80–100 words/min) leaves little to no room for ambiguity, thus potentially eliminating the barrier to replication criticized by Galbicka (1994).

Percentile Schedule Requirements

Although the percentile schedule brings systemization and objectivity back into the science of shaping, one must adhere to a few requirements when using it. First, the shaper must quantify the behavior in a way that results in an ordinal ranking of a specific dimension of behavior (e.g., frequency, latency, duration). The shaper then arranges these rankings in an ascending or descending fashion based on the objective of the intervention (i.e., ascending for increasing behavior, descending for decreasing behavior). For example, if he or she wants to increase reading speed, the shaper would rank and order each timing from lowest frequency to highest frequency (within the observation window of m). If the shaper wants to decrease latency to pick up toys, the arrangement would go from longest latency to shortest latency.

PT’s use of frequencies within timings makes this requirement quite easy to meet. Because each timing is displayed as a frequency dot on the SCC, the precision teacher can quickly order timings from lowest to highest frequency by inspecting the chart, even when the data are not already neatly arranged in this manner.

The second requirement involves establishing the observation window (m) and the theoretical density of reinforcement (w). Understanding the learner’s history and the various nuances of his or her needs aids in assigning a value to each variable. A few studies have examined optimal values for these two variables. Athens, Vollmer, and St. Peter Pipkin (2007) measured the duration of time on task and compared three m values (5, 10, and 20). The m = 20 condition produced and maintained longer durations of on-task behavior. Lamb, Morral, Kirby, Iguchi, and Galbicka (2004) shaped smoking cessation using percentile schedules while evaluating various densities of reinforcement (w). Participants in the w = .70 group showed a greater drop in breath carbon monoxide (CO) levels compared to the w = .10, w = .30, and w = .50 groups.

The following example illustrates integrating the observation window and density of reinforcement selections into the formula. Assume a shaper selected an observation window (m) of 10. This means that the criterion calculation includes the past 10 frequencies of the targeted skill. If the shaper prefers a dense schedule of reinforcement for a learner/skill, he or she could select 70% (w = .70), indicating that, in theory, 70% of future timings would access a reinforcer. The density is theoretical because the obtained rate of reinforcer delivery after the learner behaves may not be precisely 70%; it could be more or less. When these values (10, .70) are entered into the formula, the result is k = (10 + 1)(1 − .70) = 3.3. Therefore, k = 3.3, which rounds down to 3 due to the impossibility of emitting fractions of behavior. In other words, the learner needs to beat the third-ranked timing to access the reinforcer.
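The calculation above can be sketched in a few lines of Python; `k_rank` is a hypothetical helper name, and the floor reflects the convention that fractions of behavior are impossible:

```python
import math

def k_rank(m: int, w: float) -> int:
    """Rank a new timing must surpass to earn a reinforcer.

    m: observation window (number of most recent timings considered)
    w: theoretical density of reinforcement selected by the teacher
    """
    # Percentile-schedule formula: k = (m + 1)(1 - w), truncated to a whole rank
    return math.floor((m + 1) * (1 - w))

# Worked example from the text: m = 10, w = .70 -> 3.3 -> K3
print(k_rank(10, 0.70))  # -> 3
```

With w = .50 and m = 10, the same function yields 5, matching the K5 schedule used later in the article.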

Similar to the shorthand of referring to a “fixed ratio 4” schedule as an FR4, the percentile schedule may use similar shorthand. In other words, a percentile schedule of reinforcement with a 70% density and an observation window of 10 equals a “K3.” Saying “K3” provides quick and clear language to convey to another precision teacher or researcher that a reinforcer is delivered only when a frequency exceeds the third-ranked frequency. As such, the current author suggests that K-schedules be the shorthand term for percentile schedules of reinforcement and will use that term throughout the rest of this article. For precision, it is also best to report m when using the shorthand Kn terminology, where n is the nth-ranked timing. A K9 when m = 10 would look quite different from a K9 when m = 30.

Shaping Studies Using K-Schedules

Researchers have successfully used K-schedules in shaping duration, rate, and variability in humans and animals. Galbicka, Kautz, and Jagers (1993) shaped runs of responses with rats using a percentile schedule (m = 24, w = .33, k = 16.08). With a K16, the rat’s next run would need to exceed the 16th ranked run of the previous 24 runs (m = 24). The researchers not only increased run rates to 12 responses per trial but also held that rate steady for 60 consecutive sessions.

In one of the first research studies on K-schedules with humans, Lamb et al. (2004) examined various densities of reinforcement (w = .10, .30, .50, and .70) with 82 cigarette smokers. Criteria for reinforcement were contingent on decreases in CO levels. Results indicated that a significant drop in CO levels occurred for all participants in each group, with the greatest drop for the group with the greatest density of reinforcement (w = .70).

Most recently, Clark, Schmidt, Mezhoudi, and Kahng (2016) used a K5 (m = 10) schedule to increase accuracy and fluency with a 14-year-old boy diagnosed with pervasive developmental disorder and mild intellectual disability. The researchers measured the latency to start three tasks: adding two-digit numbers without regrouping, writing the punctuation mark at the end of a sentence after hearing the sentence, and copying a 10-character sentence. They failed, however, to describe the modality of solving the addition facts (answered vocally or textually). Overall, latency decreased for all three skills, with significant decreases observed for the addition and punctuation skills. The latency to begin sentence copying decreased about 10 s.

Although they were successful in using the K-schedule, Clark et al. (2016) did not functionally measure fluency. First, they failed to objectively define fluency, let alone define it in terms of count and time (Binder, 1996). Second, the authors measured the time to begin a task (i.e., latency) but not fluency. Although fluent behaviors can include a latency dimension, fluency is calculated from the interresponse times between responses, in addition to various performance outcome probes. Preference for each targeted skill and component skill levels (e.g., prerequisite skills like handwriting, solving one-digit math facts) are two factors that were not examined but that could have contributed to latency values.

Shaping in PT

Although shaping occurs often in PT research and practice, authors rarely specify the role of reinforcement in published studies. When researchers and clinicians specifically reinforce increases in frequency, two common schedules appear: personal best (e.g., Milyko, Berens, & Ghezzi, 2012) and celeration aim lines (also known as celeration goals; Kubina, 2019).

Personal best requires little skill to execute. The precision teacher delivers the reinforcer when the learner meets a frequency on a program that exceeds all others (i.e., the best score, or the highest frequency for accelerating programs) in the history of that skill. This results in a very lean schedule of reinforcement, so clinicians often insert various rules in addition to “personal best” to increase the density of reinforcement. Using K-schedule nomenclature, the personal best schedule equates to k = m (Km), where every past timing has to be exceeded for reinforcement delivery.

Using celeration aim lines is an even more rigorous schedule than personal best. In this reinforcement schedule, the learner not only needs to exceed his or her personal best but also likely needs to emit multiple responses more than the “best” for his or her frequency to fall along the projected trend line and access the reinforcer. Using celeration aim lines involves drawing a projected celeration line from the learner’s initial frequency toward the goal (e.g., an interim weekly goal) or the frequency aim of the skill (Johnson & Layng, 1992). Celeration is defined as a trend line that quantifies how quickly a behavior changes over time (Kubina, 2019). For example, a celeration of x2/week indicates a doubling of the skill’s frequency every week. A celeration of x1.5/week indicates 50% growth across the week. If the precision teacher wants 50% growth across weeks, or a x1.5 celeration, he or she draws a x1.5 celeration line from the beginning frequency toward the frequency aim.

Figure 1 displays an initial frequency of 16/min correct with a projected celeration aim line toward the yellow aim band denoting the frequency aim of 60/min. At a growth rate of 50%, the learner is predicted to meet the aim of 60/min in 3.5 weeks. If the learner consistently failed to meet the projected celeration line, the precision teacher would redraw the celeration aim line starting at the current day’s frequency, likely with supplemental interventions (e.g., priming, prompting). Using K-schedule nomenclature, the celeration aim line schedule would equal a k > m (Km+), where reinforcement is contingent on frequencies that meet or exceed the projected celeration aim line.

Fig. 1

An example of drawing a celeration aim line from an initial timing

Although clinical and empirical data indicate success at shaping high-rate responding or fluent behaviors using either personal best or celeration aim lines (e.g., Johnson & Layng, 1992; Kubina, 2019; Milyko et al., 2012), these schedules may not be appropriate for all learners given their thin density. As such, the K-schedule seems preferable for those learners who require more frequent reinforcers. Further, K-schedules allow for a bit of “forgiveness” on days when the learner may not perform at his or her best (e.g., just had a fight with a friend, starting to get sick, lack of sleep). The denser the K-schedule, the more variably the learner may respond and still receive access to reinforcers, thus creating a more enriched environment.

Another benefit of the K-schedule is possible expedited training for shapers, who only need to know how to count. A very new shaper with no experience can produce the same results as a shaper with 10 years of experience because both follow the Kn parameters: simply count the last m timings and deliver a reinforcer if the new timing beats the nth-ranked one. The systematic nature of the K-schedule makes this possible. The shaper does not need to learn exceptions to the rule to accommodate a lean schedule. Further, the shaper does not need to redraw celeration aim lines when the learner’s behavior consistently fails to meet the goal. The K-schedule formula adjusts for such variability, nuances in behavior, and environmental conditions.

Procedure

General K Criterion

When first using the K-schedule with a learner, select a K5 if the learner demonstrates previous success with a delay to accessing reinforcers. Research suggests that a denser schedule of reinforcement is optimal (Lamb et al., 2004), although others have found success with a K5 schedule as well (Clark et al., 2016). Selecting a K5 preserves resources, reduces reinforcement consumption time, and may strike a balance between the benefits of a high density of reinforcement and more closely replicating conditions of the natural environment. However, with learners new to intermittent reinforcement, a smaller k selection may be best (e.g., a K2 fading to a K3). The example in the two sections that follow will use a K5 for clarity, meaning that the criterion for reinforcement is calculated from 50% (w = .50) of the most recent 10 frequencies (m = 10).

Denoting the K-schedule on all charts in some fashion aids in clarity, especially for those planning on fading out the density of reinforcement by moving through larger k values. Figure 2 denotes the K-schedule at the bottom of the chart in the chart labels. If altering K-schedules, the precision teacher would use phase-change lines to denote the change. The k value describes the general criterion: for a timing to access a reinforcer, it must surpass the kth-ranked value. Therefore, under a K5, a timing must surpass 5 of the last 10 (m) timings.

Fig. 2

Anonymous, real data to show K-schedule notations on a timings-per-minute SCC

Specific K Criterion

There are two instances in which a precision teacher would introduce a K-schedule: (a) a skill is currently in data collection or training with sufficient data (e.g., at least 10 timings), or (b) a skill is new or currently without sufficient data to meet the requirements (e.g., fewer than 10 timings). The sections that follow detail the steps one can take in these two scenarios.

Skills with Previous Data

When implementing the K-schedule, the precision teacher could start with skills with previous data. This method is the least complicated. First, the teacher identifies the last 10 frequencies of the skill targeted for acquisition. For example, if he or she is looking to increase the frequency of words read, the precision teacher counts back 10 timings of correct words read. From this set of 10, the precision teacher would count to 5 (given the use of a K5) from lowest to highest and stop at the fifth timing. Therefore, the subsequent timing needs to exceed the fifth ranked timing to result in a reinforcer. That timing value sets the specific criterion; all other timing values are superfluous, and there is no need to count past the fifth timing.

Figure 3 highlights 10 timings in a box. As a reminder, the precision teacher can stop at the fifth timing. When the precision teacher counts from lowest to highest, the following sequence emerges: 8, 10, 10, 12, and 12. The fifth timing, 12, sets the criterion. In other words, the subsequent timing needs to exceed this count to qualify for a reinforcer. Therefore, if the subsequent timing resulted in a count of 13 or more in 15 s, the learner would receive a reinforcer. If the learner emitted 12 responses or fewer in the timing, the reinforcer would be withheld.

Fig. 3

Same as Fig. 2, with highlighting to show the m = 10 timings to establish a specific criterion
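The rank-and-stop procedure can be sketched as follows. The 10 counts are one ordering consistent with the Fig. 3 example (the exact chart order is an assumption, but only the set of values matters for the ranking):

```python
def specific_criterion(recent_timings, k):
    """Return the kth-ranked (lowest-to-highest) timing.

    The next timing must exceed this value to earn a reinforcer;
    ranks above k are superfluous to the calculation.
    """
    return sorted(recent_timings)[k - 1]

# Last 10 counts (15-s timings) consistent with the Fig. 3 example
window = [12, 8, 10, 10, 12, 13, 13, 15, 20, 22]
criterion = specific_criterion(window, k=5)
print(criterion)       # -> 12
print(13 > criterion)  # a count of 13 earns the reinforcer -> True
print(12 > criterion)  # 12 or fewer: reinforcer withheld -> False
```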

Figure 4 highlights the timing immediately after the m observation window: 16 words in 15 s. This timing surpassed the specific criterion of 12 in 15 s, and, therefore, the precision teacher delivered a reinforcer. When using a paper chart, writing a “+” above the frequency dot indicates reinforcer delivery. This designation aids in determining treatment integrity across instructors. Supervisors can evaluate, based on a permanent product, whether the precision teacher adhered to the reinforcer protocol based on the “+” associated with the timing. The written count above the frequency dot also helps to quickly identify lowest to highest frequencies to set the specific k criterion within the timing. In other words, if the timing length is anything other than 1 min, the precision teacher can quickly calculate the number of behaviors the learner needs to emit to access the reinforcer for the next timing.

Fig. 4

Similar to Fig. 2, with highlighting to show the “recent” timing that met the specific criteria

In reference to this specific example, the learner now has 10 new most recent timings (see Fig. 5). The observation window now excludes the first timing (12 count) and includes the most recent timing (16 count), resulting in a new low-to-high sequence: 8, 10, 10, 12, 13, 13, 15, 16, 20, and 22. The fifth-ranked timing still sets the criterion because of the 5 in the K5 schedule. Yet the specific criterion has changed to a 15-s timing count of 13 because 13 now sits in the fifth rank. Depending on the learner’s data, the specific criterion may stay the same across timings or it may change. However, the criterion is always recalculated after every timing. Although this occurs within the session, it takes no more time than the time needed to count.

Fig. 5

Similar to Fig. 4, but shifting the m = 10 window to include the most recent timing. Therefore, the old timing is removed from the window to allow the new timing to be included in the calculation of the specific criterion
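A minimal sketch of the sliding m = 10 window just described; `update_window` is a hypothetical helper, and `deque` with `maxlen` drops the oldest timing automatically:

```python
from collections import deque

def update_window(window, new_timing, m=10):
    """Slide the m-timing observation window: drop the oldest, keep the newest."""
    d = deque(window, maxlen=m)
    d.append(new_timing)
    return list(d)

# Oldest-first window from the running example; the reinforced 16 replaces the first 12
window = update_window([12, 8, 10, 10, 12, 13, 13, 15, 20, 22], 16)
print(sorted(window))              # -> [8, 10, 10, 12, 13, 13, 15, 16, 20, 22]
new_criterion = sorted(window)[4]  # fifth rank under the K5 schedule
print(new_criterion)               # -> 13
```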

New Skills

If introducing the K-schedule with a skill that lacks 10 timings of previous data, the precision teacher would follow the protocol discussed here. Again, the threshold of 10 applies only because this example uses m = 10; the same logic applies for any m value selected by the precision teacher. This protocol differs from the previous one only in the initial timings needed to establish the m observation window. After the learner emits 10 timings, it follows the same steps as the previous protocol.

If starting with a very new skill, simply reinforce the first timing. This helps establish a contingency of reinforcement between the response and consequence. Figure 2 shows the first timing with a count of 12 and a “+” indicating reinforcer delivery (i.e., criterion met).

The subsequent timings (Timings 2–10) use a density as close to 50% as fractions of behavior allow, ranging from 50% to 67% depending on whether the number of previous timings is even or odd. Because no precedent currently exists, the present author elected to err on the side of being gracious to the learner with greater densities of reinforcement. A detailed description for Timings 2–10 is as follows. For the second timing, the frequency must surpass the first timing’s frequency for the learner to access the reinforcer. Figure 2 shows that the second timing failed to meet the criterion, and the reinforcer was withheld, as there is no “+” above the 8. The third timing’s frequency must surpass one of the previous two (sticking with a w of 50%) for a reinforcer to be delivered. This procedure repeats for Timings 5, 7, and 9 because the number of previous timings is even, making a 50% density possible. The fourth timing’s frequency must surpass one of the previous three frequencies for a reinforcer to be delivered (i.e., a 67% density, because 50% is not possible). This procedure repeats for Timings 6 (surpass two out of five timings; 60% density), 8 (surpass three out of seven timings; 57% density), and 10 (surpass four out of nine timings; 56% density). Once the skill has a history of the selected m timings (in this example, m = 10), it may proceed in the same fashion as described in the “Skills With Previous Data” section.
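One reading of this bootstrap rule is that, while the window fills, the rank to surpass is roughly half the number of previous timings, rounded in the learner’s favor. A sketch under that assumption (the function name is hypothetical):

```python
def bootstrap_k(num_previous_timings: int) -> int:
    """Rank to surpass while the observation window is still filling.

    Roughly half of the previous timings, rounded down so the density of
    reinforcement stays at or above 50%; the very first timing is
    reinforced outright to establish the contingency.
    """
    if num_previous_timings == 0:
        return 0  # Timing 1: deliver the reinforcer unconditionally
    return max(1, num_previous_timings // 2)

# Timings 2-10 (i.e., 1 through 9 previous timings)
print([bootstrap_k(n) for n in range(1, 10)])  # -> [1, 1, 1, 2, 2, 3, 3, 4, 4]
```

These ranks reproduce the densities listed in the text (e.g., surpass two of five for Timing 6, three of seven for Timing 8, four of nine for Timing 10).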

Latency and Duration Measures

Time measures, such as latency and duration, similarly follow the previously mentioned protocol. The following example pertains to a learner visually tracking a reinforcing object, with the goal of looking at the object as it moves for 1 min. The learner looks at the reinforcing object for 2, 5, 4, 8, 15, 10, 11, 16, 18, and 20 s (see Fig. 6). The precision teacher then arranges these times into ascending order to match the goal of the program: 2, 4, 5, 8, 10, 11, 15, 16, 18, and 20. For the sake of showing a different w value, the learner in this example requires a denser schedule of reinforcement (70%). Therefore, the clinician selected a K3, where, out of the last 10 timings, the learner’s next duration must exceed only the third ranked. The formula used is as follows: k = (m + 1)(1 − w) = (10 + 1)(1 − .70) = (11)(.30) = 3.3, or a K3. Knowing the 3 of the K-schedule, the precision teacher only needs to count up to the third timing (e.g., “2 . . . 4 . . . 5”) and stop, because the 5-s duration sets the specific criterion.

Fig. 6

An example of a K-schedule with duration timings

When applying the K3, the learner must surpass a duration of 5 s as that is the third ranked duration in the ascending sequence. Therefore, if the learner visually tracks for 5 s or less, they will not gain access to the reinforcer. If the learner visually tracks for 6 s or more, they will gain access to the reinforcer promptly at the end of the timing. Figure 7 shows that the learner visually tracked the reinforcing object for 15 s and as such gained access to the reinforcer.

Fig. 7

Similar to Fig. 6, but with an added new timing that met the criterion

Now, a new set of timings comprises the m window: 5, 4, 8, 15, 10, 11, 16, 18, 20, and 15 s. The most recent 15-s timing pushes the 2-s timing out of the observation window. When arranged in ascending order, a timing of 8 s becomes the new third-ranked duration. The learner’s specific criterion now requires him or her to visually track for longer than 8 s to access the reinforcer. This procedure continues until the learner meets the mastery criterion of visually tracking for 1 min or until the clinician alters the K-schedule.
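The duration example can be traced with the same ranking logic (values in seconds from the Fig. 6 example; `duration_criterion` is a hypothetical helper name):

```python
def duration_criterion(window, k=3):
    """Third-ranked duration (ascending) that the next timing must exceed."""
    return sorted(window)[k - 1]

window = [2, 5, 4, 8, 15, 10, 11, 16, 18, 20]  # seconds of visual tracking
print(duration_criterion(window))  # -> 5: track longer than 5 s for the reinforcer

# The learner tracks for 15 s (reinforced), so the window slides forward
window = window[1:] + [15]
print(duration_criterion(window))  # -> 8: the new third-ranked duration
```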

Order of Steps

When implementing this protocol with a learner, it is advantageous for the precision teacher to follow these steps in this specific order (again, variances to this sequence are made on an individual basis):

  1. Indicate the starting point with the stimuli (if using worksheets). This can be done by drawing an arrow on the stimuli, pointing with your finger, and so on.

  2. Determine the k criterion. Do not tell the learner, as you want the reinforcement schedule to shape the behavior; goal setting would be an intervention.

  3. The learner performs the timing.

  4. Immediately (3–5 s) deliver the reinforcer (if k was met) or withhold the reinforcer (if k was not met).

  5. Deliver behavior-specific praise/feedback.

  6. Provide error correction in a fashion appropriate to the learner and skill.

  7. Repeat Steps 1–6 for subsequent timings.
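Steps 2–4, plus the window update between timings, can be sketched as a session loop. Everything here is illustrative: `get_count` stands in for the learner performing a timing, and reinforcer delivery is represented by the returned log rather than an actual clinical event:

```python
def run_session(num_timings, get_count, window, k=5):
    """Run timings under a K-schedule, recalculating the criterion each time."""
    log = []
    for _ in range(num_timings):
        criterion = sorted(window)[k - 1]  # Step 2: determine k (not told to the learner)
        count = get_count()                # Step 3: the learner performs the timing
        reinforced = count > criterion     # Step 4: deliver or withhold immediately
        log.append((count, criterion, reinforced))
        window = window[1:] + [count]      # slide the window for the next timing
    return log

counts = iter([16, 14, 12])  # hypothetical counts for three timings
log = run_session(3, lambda: next(counts), [12, 8, 10, 10, 12, 13, 13, 15, 20, 22])
print(log)  # each entry: (count, criterion in effect, reinforcer delivered?)
```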

Tips

The previously listed procedure describes the general protocol for implementing K-schedules with an existing or new skill. What follow are some suggestions to make the shaping process even more successful.

Predicted Frequency Aim

The specific criterion should never exceed the frequency aim of the skill. For example, suppose the specific criterion indicates 130/min for a reinforcer, but the aim for reading words in a list suggests a range of 80–120/min (Kubina & Yurich, 2012). Staying true to the origins of PT via frequency-building instruction, the goal is not to produce “as-fast-as-you-can-go” responding. Shaping quick, paced responding within a specific frequency range that historically predicted fluency will result in the greatest outcomes. Therefore, it is neither necessary nor productive to build responding beyond a frequency aim. As such, when the calculated criterion exceeds the frequency aim, use the frequency aim as the k criterion for a reinforcer.
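Capping the criterion at the frequency aim amounts to taking a minimum; the counts below are hypothetical values for a words-read program with an aim band of 80–120/min:

```python
def capped_criterion(window, k, frequency_aim):
    """Specific criterion, never exceeding the skill's frequency aim."""
    return min(sorted(window)[k - 1], frequency_aim)

window = [118, 125, 130, 128, 122, 131, 127, 133, 129, 135]  # counts/min (hypothetical)
print(capped_criterion(window, k=5, frequency_aim=120))  # -> 120, not the 128 a K5 yields
```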

When to Use Goal Setting

Adding a goal to the K-schedule makes it unclear whether the rule of the goal or the contingency of the schedule controls the shaping of responding (e.g., increased speed). Goal setting functions as a motivating operation (Mayer, Sulzer-Azaroff, & Wallace, 2014) with value-altering effects and behavior-altering effects (Hayes, Jacobs, & Lewon, 2020), potentially benefiting the learning experience. However, in a clinical context, the precision teacher first needs to evaluate the conditions under which goal setting benefits the learner specifically and individually. Experimentally, researchers have found goal setting to be generally beneficial (e.g., Locke, Shaw, Saari, & Latham, 1981). Clinically, goals have, at times, turned into fluency barriers that hindered the learner’s paced responding; for example, learners may slow their pace as they approach the goal. As such, the suggested standard operating procedure at the beginning of a learner’s experience is to shape increases via contingency shaping with the K-schedule alone and to introduce goal setting later if the precision teacher deems it a good fit for a particular learner.

Experience aids in determining the efficacy of goal setting. The following questions may aid in making that determination. Is the learner motivated by goals? For example, does the learner play sports or video games? Does the learner tend to set his or her own goals when playing or in other areas in life? Does the learner like to see the dots on his or her chart go up and want to beat the previous dots? If so, goal setting would likely assist in the shaping process. Alternatively, when the precision teacher plays games with the learner, does the learner tend to slow down, get agitated, increase the pace of his or her breathing, and use more speech dysfluencies (e.g., stutters) or “anxiety-related behaviors” as his or her responding approaches the goal? If so, a goal would likely not assist in shaping increases in frequencies.

When using goal setting with a K-schedule, the author has distinguished between two types of goal setting: k goal setting and aim goal setting. For k goal setting, the precision teacher informs the learner of the response requirement to meet the criterion for a reinforcer with the program (e.g., verbally spoken, visually marked). For aim goal setting, the precision teacher informs the learner of the response requirement to meet the overall frequency aim for the program (e.g., verbally spoken, visually marked). During aim goal setting, the learner still receives reinforcers according to the regular K-schedule; however, the learner would just have access to the ultimate goal of mastery of the skill, the terminal criterion.

Phase Changes

The observation window resets when a phase change occurs, whether big (e.g., a new slice or step) or small (e.g., moving to a random presentation of stimuli). In other words, the history of the skill (i.e., previous timings) is no longer included in the calculation, and the precision teacher determines the criterion in a fashion similar to that of a new program. This allows for a neutral and accessible way to reach criterion: the criterion comes from behavior within the current condition and is not pulled from an unequal condition, which would risk ratio strain or an inflated schedule of reinforcement. This does not mean that the clinician should ignore the learner’s total history in making data-based decisions. Rather, that history simply does not factor into the calculation of the K-schedule.

Mastery and Magnitude of the Reinforcer

Currently, no research exists evaluating the manipulation of reinforcer magnitude using the percentile schedule. This does not mean, however, that it is without merit to differentially reinforce benchmarks toward mastery, as differential reinforcement has quite an extensive line of research. As previously mentioned, mastery in PT occurs when a skill meets particular frequency aims known to produce maintenance, endurance, stability, application, and adduction outcomes. It serves as the terminal criterion in the shaping process for precision teachers. The following suggested mastery criteria come from Berens, Boyce, Berens, Doney, and Kenzer (2003); the specific use of differential reinforcement is detailed next to each criterion. Adjusting the reinforcer magnitude differentially reinforces the behavior meeting various benchmarks of mastery:

  • Aim: A timing meets the frequency aim with near 100% accuracy; the reinforcer doubles.

  • Qualification for mastery: Two timings in a row meet the aim with near 100% accuracy, assuming a previous timing was close to the aim. This evaluates stability around the aim; the reinforcer triples.

  • Mastery: In a subsequent session, the first timing is at the aim with 100% accuracy. This functions as a slight retention check from the previous “qualification” session. From this mastery session, the retention schedule begins; the reinforcer quadruples.

Applying these rules to an example, let us imagine a learner works for points. For every timing that results in a reinforcer (i.e., a timing that exceeds the specific criterion), the learner earns 5 points. When the frequency meets the predicted frequency aim for the first time, the learner gets 10 points. When the learner qualifies for mastery, he or she gets 15 points. And finally, when the learner masters the program, 20 points are awarded. This approach applies to a variety of reinforcers, such as tokens and edibles, or even the duration of access to a reinforcer via doubling, tripling, and quadrupling.
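The point values in this example follow a simple multiplier scheme; the benchmark labels below are shorthand for the criteria listed above:

```python
BASE_POINTS = 5  # points for an ordinary reinforced timing

def magnitude(benchmark: str) -> int:
    """Scale the reinforcer as the learner reaches each mastery benchmark."""
    multipliers = {"criterion": 1, "aim": 2, "qualification": 3, "mastery": 4}
    return BASE_POINTS * multipliers[benchmark]

print([magnitude(b) for b in ("criterion", "aim", "qualification", "mastery")])
# -> [5, 10, 15, 20]
```

The same doubling/tripling/quadrupling applies to tokens, edibles, or duration of access by swapping the base unit.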

Interventions

Although the K-schedule is an intervention in itself, clinicians may need to alter its m and w variables to promote greater success on the skill (i.e., steeper celerations or significant increases in level, what precision teachers call "jump-ups"; Kubina & Yurich, 2012). The interventions that follow detail how to alter K-schedule values under various conditions.

Increasing the Density of Reinforcement

Increasing the density of reinforcement (w) can be successful in a variety of situations. It can be used with flat celeration lines (e.g., ×1.0, where the trend is neither increasing nor decreasing), large bounce envelopes (i.e., variability greater than ×2), or a worsening learning picture (e.g., decreasing corrects and increasing errors).
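As a sketch of how w enters the calculation: in a percentile schedule, the reinforcement criterion can be taken as the kth lowest frequency among the last m timings, with k = m(1 − w), a rank rule consistent with the Kn values cited in the Discussion (e.g., K5 corresponds to 50% density when m = 10). Raising w lowers k, and with it the criterion, so more timings earn reinforcement. This is an assumption-laden illustration, not the source's published algorithm; the function name is hypothetical:

```python
def k_schedule_criterion(history, w, m=10):
    """Frequency the next timing must exceed to earn a reinforcer.

    history: recent timing frequencies (responses/min), most recent last
    w: theoretical density of reinforcement (e.g., 0.5 for 50%)
    m: observation window (number of past timings consulted)

    Assumes criterion = kth lowest of the last m timings, k = m * (1 - w).
    """
    window = sorted(history[-m:])       # window frequencies, worst first
    k = max(1, round(m * (1 - w)))      # rank of the criterion timing
    k = min(k, len(window))
    return window[k - 1]

freqs = [10, 12, 11, 13, 9, 14, 10, 12, 11, 13]
dense = k_schedule_criterion(freqs, w=0.8)   # denser schedule, lower bar
lean = k_schedule_criterion(freqs, w=0.5)    # leaner schedule, higher bar
```

With this rule, moving w from 0.5 to 0.8 drops the criterion rank from the 5th to the 2nd lowest timing, so flat or bouncy performances contact reinforcement more often.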

Decreasing the Density of Reinforcement

Conversely, thinning the density of reinforcement may produce turn-ups in celeration (i.e., a more sharply increasing trend in the celeration line; Kubina & Yurich, 2012) or jump-ups in frequency. Thinning is indicated when (a) the actual density of reinforcement greatly exceeds the theoretical density or (b) a need arises to fade out external reinforcement.

Although the precision teacher may set the theoretical reinforcer density (w) at 50%, past data may reflect an actual density closer to 80% (this analysis is possible because a "+" is written above each reinforced data point). This situation suggests that the learner could still produce high rates of responding and steep celeration lines even when the reinforcer is faded. Such fading helps create a learning environment more similar to that of a classroom. As the programmed, external reinforcement schedule is faded, naturally occurring contingencies in the classroom or home environment may begin to control behavior in place of the programmed interventions and external reinforcers.
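The actual-versus-theoretical comparison amounts to counting the charted "+" marks over recent timings. A minimal sketch, with hypothetical names and an arbitrary 20-percentage-point margin chosen purely for illustration:

```python
def actual_density(reinforced):
    """Obtained density: share of timings marked '+' (reinforced) on the chart."""
    return sum(reinforced) / len(reinforced)

def should_thin(reinforced, theoretical_w, margin=0.2):
    """Suggest fading when obtained density far exceeds the programmed w."""
    return actual_density(reinforced) >= theoretical_w + margin

# 16 of the last 20 timings earned reinforcement (obtained ~80%) while w = 50%
marks = [True] * 16 + [False] * 4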

Manipulating the Observational Window (m)

Both of the previous examples manipulate the density of reinforcement (w). Clinically, the present author has not experimented with changing the observation window. Research indicates that little improvement results from a small window (e.g., the 5 previous timings; Athens et al., 2007), which suggests increasing the window (e.g., to the 20 previous timings). A longer observation window, however, takes longer to calculate, as it requires more counting on the part of the precision teacher. A 10-timing window crosses about 2–3 days' worth of practice if learners engage in three to five timings per day, per skill, and a window of 20 can span multiple timing charts, creating additional logistical hurdles to scanning and counting. Further, whereas past research suggests an m of 20 produces better outcomes (e.g., longer durations) than an m of 10 (Athens et al., 2007), that study evaluated only duration, not frequency. As such, more research is needed on manipulations of the m variable while shaping frequency and other dimensions of behavior.
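The logistical cost of a longer window is simple arithmetic. A sketch, assuming the teacher counts backward across practice days and any partially used day still must be scanned:

```python
import math

def days_spanned(m, timings_per_day):
    """Approximate number of practice days an m-timing window reaches back."""
    return math.ceil(m / timings_per_day)

# Doubling m from 10 to 20 roughly doubles the days (and often the chart
# pages) the precision teacher must scan and count through.
```

At five timings per day, a 10-timing window reaches back two days, whereas a 20-timing window reaches back four.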

Discussion

For lifelong shapers, the K-schedule may seem removed from the artistic hand shaping performed so elegantly by Skinner (Peterson, 2004). However, to become as skilled as prominent shapers, any new shaper requires an extended history of reinforcement and experience, as any clinician who has trained new shapers can attest. Further, teaching the shaping process often involves more exceptions to the rule than rule following. The expert shaper's behavior comes under the control of a rich history of contingencies and nuances that do not adhere to a rigid rule. As such, newer practitioners with less shaping experience face a steep learning curve compared to those with many years of experience. The K-schedule, because it relies on a simple formula, accommodates the flexibility and nuances of the learner's own behavior; exceptions to the rule are unnecessary. The practitioner only needs to know how to count up (or down) and to discriminate between more and less. Simplicity surfaces in this model with the formula broken down into Kn nomenclature.

Shaping with K-schedules may not only reduce the training time needed to become a successful shaper but may also improve treatment integrity and replication, given the clear and consistent rules for when to deliver reinforcement (Galbicka, 1994). The benefit lies in the clear, objective, and systematic nature of the schedule.

Further, the K-schedule adjusts seamlessly when the clinician wants to increase or decrease the density of reinforcement. The process remains the same on the part of the clinician delivering the reinforcer. As such, the only slight modification needed to transition between a dense and lean schedule involves setting the general criterion. No additional training is required to teach a new method of calculating the reinforcer requirement.

The ordinal values also lend themselves nicely to fading the density of reinforcement to more closely mimic reinforcing conditions in the natural environment. For example, the learner may start on a K2 (80% density) and then fade to K5 (50% density), K7 (30% density), and finally K9 (10% density). From there, the clinician may even extend the observation window, making the schedule leaner still. These conditions more closely resemble classroom or home reinforcement schedules, yet the clinician's process of calculating and delivering remains the same. As such, the method is parsimonious.
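The fading sequence above implies a simple mapping from the Kn label to the theoretical density. A sketch, assuming w = (m − n)/m with m = 10, which reproduces the stated values (K2 → 80% through K9 → 10%); the function name is hypothetical:

```python
def kn_density(n, m=10):
    """Theoretical reinforcement density w implied by a Kn schedule,
    assuming w = (m - n) / m (matches K2 -> 80% ... K9 -> 10% in the text)."""
    return (m - n) / m

# Fading sequence from the text: K2 -> K5 -> K7 -> K9
fade = [kn_density(n) for n in (2, 5, 7, 9)]
```

Under this reading, fading is just stepping n upward; the clinician's counting procedure never changes.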

In order for behavior analysis to justify its status as a natural science, continuing to adopt parsimonious, systematic, and objective techniques is of paramount importance. PT’s use of standardization in measurement and discrete terminal mastery criteria makes the adoption of K-schedules logical. K-schedules’ ease of implementation and sensitivity to the learner would make this adoption extremely beneficial.