Innovation implementation: Overcoming the challenge

Klein, K. J., & Knight, A. P. (2005). Innovation implementation: Overcoming the challenge. Current Directions in Psychological Science, 14, 243-246.

Abstract. In changing work environments, innovation is imperative. Yet, many teams and organizations fail to realize the expected benefits of innovations that they adopt. A key reason is not innovation failure but implementation failure—the failure to gain targeted employees’ skilled, consistent, and committed use of the innovation in question. We review research on the implementation process, outlining the reasons why implementation is so challenging for many teams and organizations. We then describe the organizational characteristics that together enhance the likelihood of successful implementation, including a strong, positive climate for implementation; management support for innovation implementation; financial resource availability; and a learning orientation.

Writing a new method for nlme

I recently took some time to figure out how to write a new method for nlme to enable structuring the variance-covariance matrix of the random effects in a specific way. My goal here was to be able to run Dave Kenny’s social relations model (Kenny, 1994) using multilevel modeling and the approach described by Snijders and Kenny (1999). Taking this approach requires “tricking” the software through the use of dummy variables and constraints on the variance-covariance matrix.

Figuring out how to write a new method was more challenging than I had initially expected. There are many twists and turns in lme and it took quite a bit of time to reverse engineer the software to figure out what was going on. Unfortunately, there isn’t great documentation on the web for this process.

As part of my process, I created my own replication of one of the existing methods–pdCompSymm. I went through and commented each part of the different functions that are called, explaining my interpretation of what is going on. As you can see, there are some places where I’m just off and don’t really know what’s going on. I also converted some of the C code in nlme for running pdCompSymm into R code (this is the pdFactor.pdCompSymm function).

In the end, I was able to figure out enough of it to succeed in my goal of creating a new method for the social relations model through multilevel modeling in R. You can find this on my GitHub page. I've called it pdSRM and it has some comments at the top that explain how to use it.

One lesson learned from this is that it is challenging–but not impossible!–to specify a structure for the variance-covariance matrix using nlme that is not already in the generic methods that are provided. I also learned a ton about how lme is working behind the scenes. This took a bunch of time, but did pay off in the end.
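To give a sense of the structure that the Snijders and Kenny (1999) setup constrains the software to, the base-R snippet below builds the random-effects covariance matrix it implies: each person gets an actor effect and a partner effect, with a single actor variance, a single partner variance, and a single within-person actor-partner covariance. This is purely illustrative; the function name and the example values are mine, not part of pdSRM.

```r
# Build the random-effects covariance matrix implied by the social relations
# model for one group of n members. Each person contributes a 2x2 block:
# actor variance, partner variance, and their within-person covariance.
# (Illustrative sketch only; not code from pdSRM itself.)
srm_cov <- function(n, var_actor, var_partner, cov_ap) {
  block <- matrix(c(var_actor, cov_ap,
                    cov_ap, var_partner), nrow = 2)
  mat <- matrix(0, nrow = 2 * n, ncol = 2 * n)
  for (i in seq_len(n)) {
    idx <- (2 * i - 1):(2 * i)   # rows/cols for person i's two effects
    mat[idx, idx] <- block
  }
  mat
}

G <- srm_cov(n = 4, var_actor = 0.50, var_partner = 0.30, cov_ap = 0.10)
isSymmetric(G)            # TRUE
all(eigen(G)$values > 0)  # TRUE: positive definite for these values
```

The point of a custom pdMat class is exactly this kind of constraint: instead of estimating all 8 x 8 = 36 free (co)variances, the model estimates only the three parameters that generate the matrix.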

Trying out Sublime Text 3

I’ve been a huge fan of TextWrangler for years. I use it for all of my coding (including with R), for taking notes during meetings, for taking notes on articles, and more. It’s my most frequently used application. But, I’m ready for a change and am going to give Sublime Text 3 a try. It seems very powerful, relatively light, and incredibly extensible. 

I recently installed Sublime Text 3, added Package Control, and installed some R-related packages to try out. I’ll give it a try for the next three weeks and see what I think. It might be time to say farewell to TextWrangler.

Recurrence Quantification Analysis

This page has some wonderful resources for recurrence analysis. One particularly useful resource on this site is the listing of software options for conducting recurrence analysis. After a fair amount of searching, I couldn’t find an R package that computed the metrics from a recurrence quantification analysis. The tseriesChaos package provides a function for producing recurrence plots; but, I didn’t see anything for quantifying these plots.

After digging through the different software options listed on this site, I tried out and really like the Commandline Recurrence Plots script offered by Norbert Marwan himself.

The script was very easy to set up on my Mac and, by using Rscript, it was easy to combine with R code to (a) draw specific chunks of data for different individuals in my dataset; (b) compute and output the recurrence quantification metrics; (c) output the recurrence plot dataset for creating the actual plot; and (d) produce the plot and create a dataset of metrics.
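For intuition about what is being quantified: a recurrence plot is a thresholded distance matrix over the time series, and the simplest RQA metric, the recurrence rate, is just the share of off-diagonal points that recur. The base-R sketch below shows the idea for embedding dimension 1 (no time-delay embedding); it is my own minimal illustration, not Marwan's script, and the names and radius are arbitrary.

```r
# Minimal recurrence plot (embedding dimension 1) and recurrence rate.
# A point (i, j) is "recurrent" when the series at times i and j is
# within `radius` of itself.
recurrence_matrix <- function(x, radius) {
  d <- abs(outer(x, x, "-"))  # all pairwise distances between time points
  d <= radius                 # logical matrix: TRUE where the series recurs
}

recurrence_rate <- function(rp) {
  n <- nrow(rp)
  # exclude the main diagonal, which is trivially recurrent
  (sum(rp) - n) / (n * n - n)
}

x <- sin(seq(0, 4 * pi, length.out = 200))
rp <- recurrence_matrix(x, radius = 0.1)
recurrence_rate(rp)  # proportion of off-diagonal recurrent points
```

Metrics like determinism and laminarity build on the same matrix by counting diagonal and vertical line structures in it, which is what the commandline script computes far more completely.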

I’ll clean up, comment, and post the code that I used as soon as I can come up for air.

R Graphics Parameters — Rows and Columns

For some reason I always forget the code for setting R’s graphics parameters. And, I always need this same line. So, now I shan’t forget it.

quartz(type="pdf", file="figure_NUM.pdf")
par(mfrow=c(3,2), cex=1, mar=c(2,2,2,2))

Convert CD tracks to mp3 using ffmpeg

Just a small chunk of code to convert CD tracks (aiff) to mp3 files:

#!/bin/bash
for i in {1..12}
do
  ffmpeg -i ${i}.aiff -f mp3 -acodec libmp3lame -ab 192000 -ar 44100 ${i}.mp3
done

Ruby code to parse and combine text files

I use this ruby code to parse several tab-delimited text files that contain individual raters’ perceptions of a target (in this case a video). The rater id is embedded in the filename. The target video number is also embedded in the filename.

#! /usr/bin/env ruby

out = Dir.glob('*.txt')

# open the file to write to and add the column headers
columns = "group\trater\tmin\tengage\tprepare\tdiverge\tconverge\texecute\tcentralize\tattentive\ttone\tactivation\n"
File.open("./all_ratings.txt", 'w') { |f| f.write(columns) }

out.each do |filename|
  rater = filename.split('.')[0].split('_')[0]
  group = filename.split('.')[0].split('_')[1]

  # Assign a number for the rater
  case rater.downcase
  when "rater1"
    rater_id = 1
  when "rater2"
    rater_id = 2
  when "rater3"
    rater_id = 3
  when "rater4"
    rater_id = 4
  end
  puts "rater: " + rater + "(#{rater_id})" + " group: " + group

  # Open the file
  f = File.open(filename, "r").read

  # Split by lines - normalize Windows and Mac Classic line endings to \n first
  # (use gsub, not gsub!, which returns nil when nothing matches)
  str = f.gsub(/\r\n?/, "\n").split("\n")

  # Identify the line number that starts the data entry for this file by
  # finding a specific expression in the text of the file
  linenum = 0
  exp = "- Low marked by sluggishness"
  line = str[linenum]
  puts line
  until line.include?(exp)
    line = str[linenum]
    linenum += 1
  end

  linenum.upto(linenum + 30) do |currentline|
    min = (currentline - linenum) + 1
    # add the group_id and the rater_id to the line
    line = group.to_s + "\t" + rater_id.to_s + "\t" + str[currentline] + "\n"
    File.open("./all_ratings.txt", 'a') { |f| f.write(line) }
  end
end

Copy files from incrementally-numbered drives

This code moves through drives (attached via USB) that are numbered incrementally and copies the files on the drives to the local hard disk. I’m using this to more quickly pull the data off of a number of Affectiva Q-Sensors, which I connect to my computer with a USB hub.

#!/bin/bash
for i in {1..20}
do
  # Create the directory
  mkdir "./sensor_data/${i}"
  # Check to see if the volume is mounted
  drive="Q${i}"
  if mount | grep $drive;
  then
    echo "${drive} is mounted"
    # move the files over to the directory
    cp -r /Volumes/${drive}/ ./sensor_data/${i}/
  else
    echo "${drive} is NOT mounted"
  fi
done

Create a filled line graph with R

I used the code below to create a presentation-quality graph of data on how individuals’ activation levels (measured as electrodermal activity) change over time during a group-based task.

quartz(width=16, height=8, type="pdf", file="indiv_eda_z.pdf", dpi=600)
par(xpd=TRUE)
par(family="Tahoma", bg="white", mar=c(3,3,3,3), mgp=c(1,1,1))

ylim <- c(-1.5, 1.5)
xlim <- c(-540, 2700)
x <- aggsub$task_time
y <- aggsub$eda_z
ylo <- rep(min(y), length(y))
plot(x, y, type="n", ylim=ylim, axes=FALSE, ylab="Electrodermal Activity", xlab="Time (in minutes)", xlim=xlim, col="dark green")

xpos <- seq(-540, 2700, 180)
lab <- seq(-9, 45, 3)
axis(1, at=xpos, labels=lab, cex=1.5, lwd=.5, lty=3, tck=1, col="dark gray", pos=-1.5, col.axis="dark gray")

ypos <- seq(-1.5, 1.5, .5)
axis(2, at=ypos, labels=ypos, cex=1.5, las=2, tck=1, lwd=.5, lty=3, col="dark gray", pos=-540, col.axis="dark gray")

zerox <- -540:2700
zeroy <- rep(0, length(zerox))
lines(zerox, zeroy, lty=1, lwd=2, col="red")

lines(x, y, lwd=2.5, col="dark green")
xx <- c(x, rev(x))
yy <- c(ylo, rev(y))
polygon(xx, yy, col="light green", border=FALSE)

sessionstart <- min(x)
taskstart <- 0
taskend <- 1800
recordend <- 1920
sessionend <- max(x)

polygon(c(sessionstart, sessionstart, taskstart, taskstart), c(1.5, min(y), min(y), 1.5), col="#0015FF25", border=FALSE)
text(-270, 1.25, "Pre-Task Survey", col="dark blue")

#polygon(c(taskstart, taskstart, taskend, taskend), c(1.5, min(y), min(y), 1.5), col="#EAFF0025", border=FALSE)
text(900, 1.25, "Group Members Work to Develop Recruitment Video", col="dark green")

polygon(c(taskend, taskend, recordend, recordend), c(1.5, min(y), min(y), 1.5), col="#FA050535", border=FALSE)
text(recordend-(recordend-taskend)/2, 1.25, "Record\nVideo", col="red")

polygon(c(recordend, recordend, sessionend, sessionend), c(1.5, min(y), min(y), 1.5), col="#0015FF25", border=FALSE)
text(sessionend-(sessionend-recordend)/2, 1.25, "Post-Task\nSurvey", col="dark blue")