Using jrt

This package provides user-friendly functions designed for the easy implementation of Item-Response Theory (IRT) models and scoring with judgment data. Although it can be used in a variety of contexts, it was originally written to facilitate the use of these methods by creativity researchers.

Disclaimer

jrt is not an estimation package: it provides wrapper functions that call estimation packages and extract/report/plot information from them. At this stage, jrt uses the (excellent) package mirt (Chalmers, 2012) as its only IRT engine. Thus, if you use jrt for your research, please make sure to cite mirt as the estimation package/engine:

  • Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1-29. http://dx.doi.org/10.18637/jss.v048.i06

We also encourage you to cite jrt – especially if you use the plots or the automatic model selection. Currently, this would be done with:

  • Myszkowski, N. (2021). Development of the R library “jrt”: Automated item response theory procedures for judgment data and their application with the consensual assessment technique. Psychology of Aesthetics, Creativity, and the Arts, 15(3), 426-438. http://dx.doi.org/10.1037/aca0000287

Ok now let’s get started…

What the data should look like

A judgment data.frame is provided to the jrt() function. Here we’ll use the simulated one in jrt::ratings.

data <- jrt::ratings

It looks like this:

head(data)
#>   Judge_1 Judge_2 Judge_3 Judge_4 Judge_5 Judge_6
#> 1       5       4       3       4       4       4
#> 2       3       3       2       3       2       2
#> 3       3       3       3       3       3       2
#> 4       3       2       2       3       4       2
#> 5       2       3       1       2       2       1
#> 6       3       2       2       3       2       1

jrt is in development, and some of these restrictions will hopefully be lifted soon (check back!), but in the current release:

  • Your data should be ordinal/polytomous exclusively (although the plotting functions also work with binary and nominal models fitted with mirt)
  • Your data should be assumed unidimensional (one latent ability predicts the judgments)
  • Your judgments should be assumed conditionally independent (the judgments are only related to one another because they are explained by the same latent variable)
  • Your data should not include impossible values (so check that first)
  • Your data should have only 2 facets (e.g. products by judges or items, but not both)
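Since impossible values are not checked for you, a quick sanity check is worth running first. A minimal sketch, assuming the valid categories are 1-5:

```r
# All observed judgments should fall within the valid categories (here 1-5)
observed <- unlist(jrt::ratings)
range(observed, na.rm = TRUE)    # should stay within 1 and 5
table(observed, useNA = "ifany") # spot stray codes (e.g. 0, 99, -1)
```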

I know, that’s a lot that you can’t do… but this covers the typical cases, at least for the Consensual Assessment Technique – for which jrt was originally created.

Model fitting, scoring and statistics with jrt()

You will first want to load the library.

library(jrt)
#> Loading required package: directlabels

The main function of the jrt package is jrt(). By default, this function will:

  • Fit the most common and available IRT models for ordinal data
  • Automatically select the best fitting model (based on an information criterion, by default the corrected AIC)
  • Report a lot of useful indices of reliability (from IRT, CTT and the inter-rater reliability literature) and plot the Judge Category Curves and Total Information Function plot (which shows the levels of θ at which the set of judgments is the most informative/reliable) – we’ll see how to customize them later!
  • Make the factor scores and standard errors readily accessible in the @factor.scores (or @output.data) slot of the jrt object.

Let’s do it!

  • Select, fit and return statistics, with the information function plot. We’re storing the output in an object (fit) to do more with it after. Note: There’s a progress bar by default, but it takes space in the vignette, so I’ll remove it here with progress.bar = F.
fit <- jrt(data, progress.bar = F)
#> The possible responses detected are: 1-2-3-4-5
#> 
#> -== Model Selection (6 judges) ==-
#> AIC for Rating Scale Model: 4414.163 | Model weight: 0.000
#> AIC for Generalized Rating Scale Model: 4368.781 | Model weight: 0.000
#> AIC for Partial Credit Model: 4022.955 | Model weight: 0.000
#> AIC for Generalized Partial Credit Model: 4014.652 | Model weight: 0.000
#> AIC for Constrained Graded Rating Scale Model: 4399.791 | Model weight: 0.000
#> AIC for Graded Rating Scale Model: 4307.955 | Model weight: 0.000
#> AIC for Constrained Graded Response Model: 3999.248 | Model weight: 0.673
#> AIC for Graded Response Model: 4000.689 | Model weight: 0.327
#>  -> The best fitting model is the Constrained Graded Response Model.
#> 
#>  -== General Summary ==-
#> - 6 Judges
#> - 300 Products
#> - 5 response categories (1-2-3-4-5)
#> - Mean judgment = 2.977 | SD = 0.862
#> 
#> -== IRT Summary ==-
#> - Model: Constrained (equal slopes) Graded Response Model (Samejima, 1969) | doi: 10.1007/BF03372160
#> - Estimation package: mirt (Chalmers, 2012) | doi: 10.18637/jss.v048.i06
#> - Estimation algorithm: Expectation-Maximization (EM; Bock & Atkin, 1981) | doi: 10.1007/BF02293801
#> - Factor scoring method: Expected A Posteriori (EAP)
#> - AIC = 3999.248 | BIC = 4091.843 | SABIC = 4091.843 | HQ = 4036.305
#> 
#> -== Model-based reliability ==-
#> - Empirical reliability | Average in the sample: .893
#> - Expected reliability | Assumes a Normal(0,1) prior density: .894

Of course, there’s more available here than one would report. If using IRT scoring (which is the main purpose of this package), we recommend reporting which IRT model was selected, along with IRT indices primarily, since the scoring is based on the estimation of the θ abilities. In this case, what is typically reported is the empirical reliability (here .893), which is the estimate of the reliability of the observations in the sample. It can be interpreted similarly to other, more traditional indices of reliability (like Cronbach’s α).

  • Doing the same thing without messages
fit <- jrt(data, silent = T)
  • Selecting the model a priori

One may of course select a model based on assumptions about the data rather than on model fit comparisons. This is done by passing the name of a model as input to the irt.model argument of the jrt() function. This bypasses the automatic model selection stage.

fit <- jrt(data, "PCM")
#> The possible responses detected are: 1-2-3-4-5
#> 
#>  -== General Summary ==-
#> - 6 Judges
#> - 300 Products
#> - 5 response categories (1-2-3-4-5)
#> - Mean judgment = 2.977 | SD = 0.862
#> 
#> -== IRT Summary ==-
#> - Model: Partial Credit Model (Masters, 1982) | doi: 10.1007/BF02296272
#> - Estimation package: mirt (Chalmers, 2012) | doi: 10.18637/jss.v048.i06
#> - Estimation algorithm: Expectation-Maximization (EM; Bock & Atkin, 1981) | doi: 10.1007/BF02293801
#> - Factor scoring method: Expected A Posteriori (EAP)
#> - AIC = 4022.955 | BIC = 4115.549 | SABIC = 4115.549 | HQ = 4060.011
#> 
#> -== Model-based reliability ==-
#> - Empirical reliability | Average in the sample: .889
#> - Expected reliability | Assumes a Normal(0,1) prior density: .758

See the documentation for a list of available models. Most models are directly those of mirt. Others are versions of the Graded Response Model or Generalized Partial Credit Model that are constrained in various ways (equal discriminations and/or equal category structures) through the mirt.model() function of mirt.
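As an illustration of the idea (a sketch, not jrt’s exact internal code), an equal-discriminations constraint can be written in mirt syntax like this:

```r
# Constrain the discriminations (a1) of all 6 judges to be equal, which
# corresponds to a Constrained (equal slopes) Graded Response Model here
syntax <- mirt::mirt.model("F = 1-6
                            CONSTRAIN = (1-6, a1)")
fit.constrained <- mirt::mirt(data, syntax, itemtype = "graded", verbose = FALSE)
```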

Note that they can also be called by their full names (e.g. jrt(data, "Graded Response Model")).

  • Extract the factor scores with @factor.scores.
head(fit@factor.scores)
#>   Judgments.Factor.Score Judgments.Standard.Error Judgments.Mean.Score
#> 1              1.7082212                0.5825035             4.000000
#> 2             -0.7210879                0.5582415             2.500000
#> 3             -0.1523946                0.5119996             2.833333
#> 4             -0.4243494                0.5320399             2.666667
#> 5             -2.2559294                0.6721314             1.833333
#> 6             -1.4154467                0.6203573             2.166667

Note: If you want a more complete output including the original data, use @output.data. If there were missing data, @output.data also appends imputed data.

head(fit@output.data)
#>   Judge_1 Judge_2 Judge_3 Judge_4 Judge_5 Judge_6 Judgments.Factor.Score
#> 1       5       4       3       4       4       4              1.7082212
#> 2       3       3       2       3       2       2             -0.7210879
#> 3       3       3       3       3       3       2             -0.1523946
#> 4       3       2       2       3       4       2             -0.4243494
#> 5       2       3       1       2       2       1             -2.2559294
#> 6       3       2       2       3       2       1             -1.4154467
#>   Judgments.Standard.Error Judgments.Mean.Score
#> 1                0.5825035             4.000000
#> 2                0.5582415             2.500000
#> 3                0.5119996             2.833333
#> 4                0.5320399             2.666667
#> 5                0.6721314             1.833333
#> 6                0.6203573             2.166667

Judge Category Curves

Judge characteristics can be inspected with Judge Category Curve (JCC) plots. They are drawn with the function jcc.plot().

A basic example for Judge 3…

jcc.plot(fit, judge = 3)

Now of course, there are many options, but a few things that you could try:

  • Plot the category curves of all judges by using judge = "all" or simply removing the judge argument (note that you can change the number of columns or rows, see the documentation for these advanced options).
jcc.plot(fit)

  • Plot the category curves of a vector of judges by providing a vector of judge numbers. For example here for judges 1 and 6.
jcc.plot(fit, judge = c(1,6))

  • Change the layout by providing a number of columns or rows desired (not both, they may conflict):
jcc.plot(fit, facet.cols = 2)

  • Plot the category curves in black and white with greyscale = TRUE (this uses linetypes instead of colors)…
jcc.plot(fit, 1, greyscale = T)

  • Adding the reliability overlay with overlay.reliability = TRUE (reliability is scaled from 0 to 1, making it easier to read with probabilities than information)
jcc.plot(fit, 1, overlay.reliability = TRUE)

  • Using a legend instead of labels on the curves with labelled = FALSE.
jcc.plot(fit, overlay.reliability = T, labelled = F)

  • Repositioning the legend.
jcc.plot(fit, overlay.reliability = T, labelled = F, legend.position = "bottom")

  • Changing the automatic naming of the judges/items with column.names.
jcc.plot(fit, 2, column.names = "Expert")

  • Overriding the automatic naming of judges/items (you must name all items/judges, not only the ones you are plotting!) and response categories (note: works with labels or legend). Here, we want to plot experts C and D (3rd and 4th columns), but we still have to provide names for all.
jcc.plot(fit, 3:4,
         manual.facet.names = paste("Expert", c("A", "B", "C", "D", "E", "F")),
         manual.line.names = c("Totally disagree", "Disagree", "Neither agree\nnor disagree", "Agree", "Totally agree"),
         labelled = F)

  • Remove/change the title of the plot with title
jcc.plot(fit, 1, title = "")

  • Change the x-axis θ limits with theta.span = 5 (sets the maximum, the minimum is automatically adjusted)
jcc.plot(fit, 1, theta.span = 5)

  • Change the transparency of the curves by passing a vector of alpha transparency values. Pretty cool for presentations. The vector should be of the length of the number of categories + 1 (for the reliability, even if you don’t plot it) – so in our case here that’s 6 values. Note that it doesn’t work well with the labelling of the curves yet.
jcc.plot(fit, 1:4,
         labelled = F,
         line.opacity = c(0,0,0,1,0,0) # Highlighting the 4th category
         )

  • Change the colors of the curves with color.palette (uses the RColorBrewer palettes in ggplot2), the background colors with theme (uses the ggplot2 themes, like bw, light, grey, etc.), and the line size with line.width.
jcc.plot(fit, 1, color.palette = "Dark2", theme = "classic", line.width = 1.5, font.family = "serif", overlay.reliability = T, name.for.reliability = "Reliability")


or

jcc.plot(fit, 1, color.palette = "Blues", theme = "grey", line.width = 3, labelled = F)

I’ve also integrated the colors of the ggsci package (npg, aaas, nejm, lancet, jama, jco, D3, locuszoom, igv, uchicago, startrek, tron, futurama), but be careful, not all may have sufficient color values!

jcc.plot(fit, 1, color.palette = "npg", overlay.reliability = T)

Information Plots

The jrt() function already outputs an information plot, but information plots (as well as variants, like standard error and reliability) can be produced with the info.plot() function.

  • An example for judge 1:
info.plot(fit, 1)

  • For the entire set of judges, just remove the judge argument.
info.plot(fit)

  • You can switch to standard errors or reliability with the type argument.

info.plot(fit, type = "r")

(type = "reliability" also works)

info.plot(fit, type = "se")

(type = "Standard Error" also works)

  • Coerce y-axis limits by passing a numeric vector with the minimum and maximum.
info.plot(fit, type = "r", y.limits = c(0,1))

  • Use y.line to add a horizontal line, for example at a .70 threshold, usual for reliability (though thresholds are rarely used in IRT).
info.plot(fit, type = "r", y.line = .70)

  • You can plot information along with reliability (type = "ir") or with standard error (type = "ise").
info.plot(fit, type = "ise")

With a threshold value

info.plot(fit, type = "ir", y.line = .7)

And here again, themes are available.

info.plot(fit, type = "ir", y.line = .7, color.palette = "Dark2")

Similar customization options to jcc.plot() are available; here is an example:

info.plot(fit, 1, "ir",
          column.names = "Rater",
          theta.span = 5,
          theme = "classic",
          line.width = 2,
          greyscale = T,
          font.family = "serif")

Dealing with unobserved categories and Rating Scale Models

Some polytomous IRT models (namely, the Rating Scale models) assume that judges all have the same response category structure, and so they cannot be estimated if all judges do not have the same observed categories. So, if your data includes judges with unobserved categories, how does jrt deal with that?

For the automatic model selection stage, jrt will by default keep all judges but, if there are judges with unobserved categories, it will not fit the Rating Scale and Generalized Rating Scale models. You will be notified in the output.

Note: The possible values are automatically detected, but this detection can be bypassed with the possible.values argument.
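For example, to declare the full 1-5 scale yourself rather than rely on detection (a sketch using the possible.values argument; check the documentation for the exact expected format):

```r
# Declare the response categories explicitly instead of detecting them
fit <- jrt(data, possible.values = 1:5, progress.bar = F)
```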

Here’s an example on a data set where a judge had unobserved categories. By default, the set of candidate models will exclude rating scale models (note in the plot that the last judge has an unobserved category).

fit <- jrt(data, 
           progress.bar = F, #removing the progress bar for the example
           plots = F) 
#> The possible responses detected are: 1-2-3-4-5
#> 12.5% Judges (1 out of 8) did not have all categories (1-2-3-4-5 observed). Rating scale models were ignored. See documentation (argument remove.judges.with.unobserved.categories) for details.
#> 
#> -== Model Selection (8 judges) ==-
#> AIC for Graded Response Model: 1656.018 | Model weight: 0.546
#> AIC for Constrained Graded Response Model: 1656.393 | Model weight: 0.453
#> AIC for Partial Credit Model: 1678.702 | Model weight: 0.000
#> AIC for Generalized Partial Credit Model: 1668.746 | Model weight: 0.001
#>  -> The best fitting model is the Graded Response Model.
#> 
#>  -== General Summary ==-
#> - 8 Judges
#> - 100 Products
#> - 5 response categories (1-2-3-4-5)
#> - Mean judgment = 2.841 | SD = 0.785
#> 
#> -== IRT Summary ==-
#> - Model: Graded Response Model (Samejima, 1969) | doi: 10.1007/BF03372160
#> - Estimation package: mirt (Chalmers, 2012) | doi: 10.18637/jss.v048.i06
#> - Estimation algorithm: Expectation-Maximization (EM; Bock & Atkin, 1981) | doi: 10.1007/BF02293801
#> - Factor scoring method: Expected A Posteriori (EAP)
#> - AIC = 1656.018 | BIC = 1755.014 | SABIC = 1755.014 | HQ = 1696.084
#> 
#> -== Model-based reliability ==-
#> - Empirical reliability | Average in the sample: .921
#> - Expected reliability | Assumes a Normal(0,1) prior density: .919

Now, if you instead want to remove the incomplete judges when comparing the models, set remove.judges.with.unobserved.categories = TRUE (it’s a long name for an argument, so if you have a better idea for a clear but shorter name, shoot me an email!). Now all models will be compared, but with only the complete judges.

After this stage:

  • if the model selected is of the Rating Scale type, then only complete judges will be kept to fit the selected model
  • if another model is selected, then all judges will be used to fit the selected model (this is the case of the example below).

An example with the same data as above, but with remove.judges.with.unobserved.categories = TRUE. Here, since the best fitting model was the Constrained Graded Response Model (not a Rating Scale Model), the model is then refit with all judges (hence the different AIC between the two stages).

fit <- jrt(data, 
           remove.judges.with.unobserved.categories = T,
           progress.bar = F, #removing the progress bar for the example
           plots = F) 
#> The possible responses detected are: 1-2-3-4-5
#> 12.5% Judges (1 out of 8) did not have all categories (1-2-3-4-5 observed). Incomplete Judges were removed for model comparison, and in subsequent analyses if a rating scale model is selected. See documentation (argument remove.judges.with.unobserved.categories) for details.
#> 
#> -== Model Selection (7 judges) ==-
#> AIC for Rating Scale Model: 1723.348 | Model weight: 0.000
#> AIC for Generalized Rating Scale Model: 1706.738 | Model weight: 0.000
#> AIC for Partial Credit Model: 1574.999 | Model weight: 0.001
#> AIC for Generalized Partial Credit Model: 1579.209 | Model weight: 0.000
#> AIC for Constrained Graded Rating Scale Model: 1724.575 | Model weight: 0.000
#> AIC for Graded Rating Scale Model: 1701.954 | Model weight: 0.000
#> AIC for Constrained Graded Response Model: 1561.043 | Model weight: 0.945
#> AIC for Graded Response Model: 1566.783 | Model weight: 0.054
#>  -> The best fitting model is the Constrained Graded Response Model.
#> 
#>  -== General Summary ==-
#> - 8 Judges
#> - 100 Products
#> - 5 response categories (1-2-3-4-5)
#> - Mean judgment = 2.841 | SD = 0.785
#> 
#> -== IRT Summary ==-
#> - Model: Constrained (equal slopes) Graded Response Model (Samejima, 1969) | doi: 10.1007/BF03372160
#> - Estimation package: mirt (Chalmers, 2012) | doi: 10.18637/jss.v048.i06
#> - Estimation algorithm: Expectation-Maximization (EM; Bock & Atkin, 1981) | doi: 10.1007/BF02293801
#> - Factor scoring method: Expected A Posteriori (EAP)
#> - AIC = 1656.393 | BIC = 1737.154 | SABIC = 1737.154 | HQ = 1689.078
#> 
#> -== Model-based reliability ==-
#> - Empirical reliability | Average in the sample: .916
#> - Expected reliability | Assumes a Normal(0,1) prior density: .915

Getting additional statistics

Additional statistics may be computed with additional.stats = TRUE.

fit <- jrt(data,
           additional.stats = T,
           progress.bar = F, #removing the progress bar for the example
           plots = F)
#> The possible responses detected are: 1-2-3-4-5
#> 12.5% Judges (1 out of 8) did not have all categories (1-2-3-4-5 observed). Rating scale models were ignored. See documentation (argument remove.judges.with.unobserved.categories) for details.
#> 
#> -== Model Selection (8 judges) ==-
#> AIC for Graded Response Model: 1656.018 | Model weight: 0.546
#> AIC for Constrained Graded Response Model: 1656.393 | Model weight: 0.453
#> AIC for Partial Credit Model: 1678.702 | Model weight: 0.000
#> AIC for Generalized Partial Credit Model: 1668.746 | Model weight: 0.001
#>  -> The best fitting model is the Graded Response Model.
#> 
#>  -== General Summary ==-
#> - 8 Judges
#> - 100 Products
#> - 5 response categories (1-2-3-4-5)
#> - Mean judgment = 2.841 | SD = 0.785
#> 
#> -== IRT Summary ==-
#> - Model: Graded Response Model (Samejima, 1969) | doi: 10.1007/BF03372160
#> - Estimation package: mirt (Chalmers, 2012) | doi: 10.18637/jss.v048.i06
#> - Estimation algorithm: Expectation-Maximization (EM; Bock & Atkin, 1981) | doi: 10.1007/BF02293801
#> - Factor scoring method: Expected A Posteriori (EAP)
#> - AIC = 1656.018 | BIC = 1755.014 | SABIC = 1755.014 | HQ = 1696.084
#> 
#> -== Model-based reliability ==-
#> - Empirical reliability | Average in the sample: .921
#> - Expected reliability | Assumes a Normal(0,1) prior density: .919
#> -== Other reliability statistics (packages "irr" and "psych") ==-
#> - Cronbach's Alpha: .903
#> - Standardized Cronbach's Alpha : .913
#> - Guttman's Lambda 4 :.939
#> - Guttman's Lambda 6 :.908
#> - Fleiss' Kappa : .153
#> - Fleiss-Conger's Exact Kappa : .164
#> - Intraclass Correlation Coefficient (One-Way Consistency model): .495
#> - Intraclass Correlation Coefficient (Two-Way Consistency model): .538
#> - Intraclass Correlation Coefficient (One-Way Agreement model): .495
#> - Intraclass Correlation Coefficient (Two-Way Agreement model): .500

Using the fitted object

The fitted model is stored in the slot @mirt.object, so additional functions from mirt can be easily used.

For example:

# Get more fit indices and compare models
mirt::anova(fit@mirt.object, verbose = F)
#>                     AIC    SABIC       HQ      BIC   logLik
#> fit@mirt.object 1656.018 1635.001 1696.084 1755.014 -790.009
# Get total information for a given vector of abilities (theta)
mirt::testinfo(fit@mirt.object, Theta = seq(from = -3, to = 3, by = 1))
#> [1]  2.602953  6.107328 13.251952 10.853827 13.552225  9.937194  4.407784
# Get the test information for case 1
mirt::testinfo(fit@mirt.object, Theta = fit@factor.scores.vector[1])
#> [1] 15.50897
# Get marginal reliability for high abilities – using a Normal(1,1) prior
mirt::marginal_rxx(fit@mirt.object,
                   density = function(x) {dnorm(x, mean = 1, sd = 1)})
#> [1] 0.9141302

Comparing two models with Likelihood Ratio Tests

For now, comparisons between two models are not directly implemented, but they are easy to do with mirt’s anova() function, applied to the @mirt.object slots of two fitted models.

model1 <- jrt(data, "GRM", silent = T) # Fitting a GRM
model2 <- jrt(data, "CGRM", silent = T) # Fitting a Constrained GRM
mirt::anova(model1@mirt.object, model2@mirt.object, verbose = F) #Comparing them
#>                        AIC    SABIC       HQ      BIC   logLik      X2 df   p
#> model1@mirt.object 1656.018 1635.001 1696.084 1755.014 -790.009
#> model2@mirt.object 1656.393 1639.248 1689.078 1737.154 -797.197 -14.375 -7 NaN

Dealing with missing data

The ratings_missing data is a simulated dataset with a planned missingness design. jrt will by default impute missing data when judgments are partially missing, and the imputed data can be easily retrieved.

fit <- jrt(ratings_missing, irt.model = "PCM", silent = T) #fit model
#> Warning: Imputing too much data can lead to very conservative results. Use with
#> caution.
#> - Note : Person fit statistics based on imputed data! Use with caution!

The fit@output.data slot contains both the original data and the data with imputation (variable names are tagged “original” and “imputed”), as well as the factor scores.

To retrieve them separately, the imputed data can be retrieved with fit@imputed.data, the original data is in fit@original.data, and the factor scores can be accessed as described previously.
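If in doubt about the exact slot names, you can list all slots of the S4 object directly:

```r
# List the slots available on the fitted jrt object
slotNames(fit)
```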

Just using jrt for plotting?

You may want to use jrt as a plotting device only. That’s ok, because jrt plotting functions will accept mirt objects as input. They should be detected automatically as such (unidimensional models only).

Let’s fit a Generalized Partial Credit Model with mirt for this example.

fit <- mirt::mirt(data = mirt::Science, 
                  model = 1, 
                  itemtype = "gpcm",
                  verbose = F)

Now jcc.plot() can plot the category curves. Note that the default column name is now automatically switched to “Item”.

jcc.plot(fit)

For the information plot:

info.plot(fit)

For convenience the argument item can be used instead of judge in both plotting functions:

jcc.plot(fit, item = 3)

Even though it isn’t its primary purpose, jrt can also plot binary item response functions. They will be automatically detected and the plot will be named accordingly.

# SAT data from mirt
## Convert to binary
data <- mirt::key2binary(mirt::SAT12,
    key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
## Fit 2PL model in mirt
fit <- mirt::mirt(data = data, model = 1, itemtype = "2PL", verbose = F)
## Plotting an item response function
jcc.plot(fit, item = 2)

## Plotting the item response functions of the first 12 items with a larger theta range
jcc.plot(fit, facet.cols = 4, item = 1:12, theta.span = 5)