How to download YouTube videos

As a professor, I frequently use YouTube videos (and other streaming videos) in my courses. For a variety of reasons, I do not want to stream the video during a given class session. For example, if I am teaching a session virtually on Zoom, I do not want to simultaneously download and upload the video. Additionally, if I am teaching a course in China, I may not have access to YouTube. Finally, YouTube videos sometimes disappear, are blocked, or have advertisements that I don’t want to play in class.

For all of these reasons, I almost always download YouTube videos before class sessions and play them through my local machine. I never stream them directly from YouTube.

While there is a range of semi-functional browser plug-ins for downloading YouTube videos, I have never been satisfied with them. Instead, I use a command line tool, youtube-dl, for pulling videos from YouTube. It is an outstanding piece of (free) software that will enable you to quickly pull down videos to include in your classes.

If you’re on a Mac, the best way to install youtube-dl is with Homebrew. Once it is installed, you can run the following command in your shell (replacing {url} with the web address of the video you want to download):


youtube-dl "{url}"

For example, the following command would pull down an *amazing* video about recurrence analysis 🙂


youtube-dl "https://www.youtube.com/watch?v=wKOd45GLY9Y"

youtube-dl has many other options, such as selecting specific video formats, extracting audio only, or downloading entire playlists. I think this is a particularly useful guide to using youtube-dl.

zoomGroupStats released on CRAN

zoomGroupStats is now available as a package on CRAN.

Title: zoomGroupStats: Analyze Text, Audio, and Video from ‘Zoom’ Meetings
Description: Provides utilities for processing and analyzing the files that are exported from a recorded ‘Zoom’ Meeting. This includes analyzing data captured through video cameras and microphones, the text-based chat, and meta-data. You can analyze aspects of the conversation among meeting participants and their emotional expressions throughout the meeting.


# To use the stable release version of zoomGroupStats, use:
install.packages("zoomGroupStats")

# To use the development version, which will be regularly updated with new functionality, use:
library(devtools)
install_github("https://github.com/andrewpknight/zoomGroupStats")

You can stay up-to-date on the latest functionality at http://zoomgroupstats.org.

Package version of zoomGroupStats

Thank you all for the encouragement and feedback on the initial version of zoomGroupStats. I can’t believe it’s been a little over a year since I posted the first set of functions in the early days of COVID-19. Following the suggestions of several users, I took some time this past week to build this out as a more structured R package.

Accompanying the package, you will find a multi-part guide to conducting research with Zoom and analyzing Zoom meetings in R using zoomGroupStats.

Because I am actively building out new functionality, the best way to use the package right now is to install it from my GitHub repository. To do so:


library(devtools)
install_github("https://github.com/andrewpknight/zoomGroupStats", force=TRUE)
library(zoomGroupStats)

I’ll be updating the documentation and guidance videos, and adding further functionality, in the weeks ahead. The best resource for zoomGroupStats going forward will be a dedicated package site, which you can access at http://zoomgroupstats.org.

Please keep the feedback and suggestions coming!

Materials from zoomGroupStats Tutorial Session

I facilitated a short workshop to give an introduction to using the zoomGroupStats set of functions. The tutorial covered three issues. First, I offered recommendations for how to configure Zoom for conducting research projects. Second, I described how to use the functions in zoomGroupStats to parse the output from Zoom and run rudimentary conversation analysis. Third, I illustrated how to use zoomGroupStats to analyze the video files output from Zoom.

If you weren’t able to make it, here are a few artifacts from the session:

  • Presentation materials, which provided the structure for the session (but do not include the demonstrations / illustrations)
  • zoomGroupStats Tutorial Code, which walked through using different functions in zoomGroupStats
  • Supplementary Tutorial Guide, which accompanied the session to provide additional recommendations

Addressing performance tensions in multiteam systems: Balancing informal mechanisms of coordination within and between teams

Ziegert, J. C., Knight, A. P., Resick, C. J., & Graham, K. A. (In Press). Addressing performance tensions in multiteam systems: Balancing informal mechanisms of coordination within and between teams. Academy of Management Journal.

Abstract. Due to their distinctive features, multiteam systems (MTSs) face significant coordination challenges—both within component teams and across the larger system. Despite the benefits of informal mechanisms of coordination for knowledge-based work, there is considerable ambiguity regarding their effects in MTSs. To resolve this ambiguity, we build and test theory about how interpersonal interactions among MTS members serve as an informal coordination mechanism that facilitates team and system functioning. Integrating MTS research with insights from the team boundary spanning literature, we argue that the degree to which MTS members balance their interactions with members of their own component team (i.e., intrateam interactions) and with the members of other teams in the system (i.e., inter-team interactions) shapes team- and system-level performance. The findings of a multimethod study of 44 MTSs composed of 295 teams and 930 people show that as inter-team interactions exceed intrateam interactions, team conflict rises and detracts from component team performance. At the system level, balance between intra- and inter-team interactions enhances system success. Our findings advance understanding of MTSs by highlighting how informal coordination mechanisms enable MTSs to overcome their coordination challenges and address the unique performance tension between component teams and the larger system.

A faster way to grab frames from video using ffmpeg

In the zoomGroupStats functions that I described in this post, there is a function for identifying and analyzing faces. The original version that I posted uses ImageMagick, embedded in the videoFaceAnalysis function, to pull image frames out of video files. In practice, this is a very inefficient way to break down a video file before sending it to AWS Rekognition for face analysis; I’ve found that ImageMagick takes quite a long time to pull images from a video.

As an alternative, I’ve been using ffmpeg to process the video files before using the zoomGroupStats functions. I love ffmpeg and have used it for years to manipulate and process audio and video files. After you have installed ffmpeg on your machine, you can use system("xxxx") inline in your R code to execute ffmpeg commands. For example, here’s what I include in a loop that works through a batch of video files:

ffCmd = paste("ffmpeg -i ", inputPath, " -r 1/", sampleWindow, " -f image2 ", outputPath, "%0d.png", sep="")

Then you can run system(ffCmd) to execute the command. In this command, inputPath is the path to the video file, sampleWindow is the number of seconds you want between each frame grab, and outputPath is the path to the output directory, including an image name prefix, where you want the images saved.
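To make this concrete, here is a minimal sketch of how that command could sit inside a loop over a directory of video files. The directory names, the .mp4 file pattern, and the 20-second sampling window are placeholder assumptions rather than the exact setup I use:


# Minimal sketch: grab one frame every sampleWindow seconds from each .mp4 file in a directory
# (directory names and the 20-second window are placeholders)
library(tools)

inputDir = "videos"
outputDir = "frames"
sampleWindow = 20
dir.create(outputDir, showWarnings=FALSE)

for(inputPath in list.files(inputDir, pattern="\\.mp4$", full.names=TRUE)) {
    # Use the video file name (without its extension) as the image name prefix
    outputPath = file.path(outputDir, file_path_sans_ext(basename(inputPath)))
    ffCmd = paste("ffmpeg -i ", inputPath, " -r 1/", sampleWindow, " -f image2 ", outputPath, "-%0d.png", sep="")
    system(ffCmd)
}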

Using a computer that isn’t very powerful (a Mac Mini), I was able to break down 20 roughly two-hour videos (about 2,400 minutes of video) into frame grabs every 20 seconds (around 7,000 images) in less than an hour.

I will end up replacing ImageMagick with ffmpeg in the next iteration of the videoFaceAnalysis function. This will also come with the output of new metrics (i.e., face height/width ratio and size oscillation). Stay tuned!

Spring 2021 Courses

People Metrics
Open to BSBA, MBA, and Specialized Masters Students

Metrics are at the core of people analytics. The purpose of this course is to introduce you to the foundations of assessing behavior in organizations using novel measurement approaches and large datasets. Through classroom discussions and real-world applications, this course will enable you to add value to organizations through the development, use, and interpretation of innovative people metrics. Specifically, after taking this course, you will be able to:

  • Develop a clear and logical conceptual measurement model. A conceptual measurement model is the foundation of creating novel and useful new approaches for assessing intrapersonal characteristics (e.g., personality) and interpersonal behavior (e.g., knowledge sharing, teamwork).
  • Identify novel sources of data for innovative people metrics. Organizations are awash in the traces of individual behavior and social interactions. Decoding how data that already exist in an organization can be used to understand behavior is an essential skill for adding value in the field of people analytics.
  • Apply a rigorous process for validating new people metrics. Developing a measurement model and finding sources of data are necessary, but insufficient for adding value through people metrics. New measures must be validated.

Fall 2020 Courses

Foundations of Impactful Teamwork
Required Course for 1st Year MBA Students

Working effectively in and leading teams are essential competencies in modern organizations, both large and small. The purpose of this course is to lay a foundation of knowledge and skills that will enable you to differentiate yourself as an effective leader and member of impactful teams. The specific learning objectives for this course include:

  • Be able to launch and lead goal-directed project teams that meet or exceed stakeholders’ expectations for task performance, provide a positive working experience for team members, and enable team members to grow as a unit and as individuals.
  • Be able to diagnose common interpersonal challenges that arise in teams composed of diverse individuals who are working under pressure and relying heavily on virtual modes of collaboration.
  • Refine your awareness of your strengths and weaknesses as a leader and develop a plan for honing your leadership identity and interpersonal skills during your MBA program.
  • Augment your resourcefulness when working in a global virtual team.

Organizational Research Methods
Doctoral Course

The purpose of this course is to expose you to a range of methods for conducting research on organizations. We will do this through readings, class discussions and exercises, as well as through writing and reviewing one another’s work. Because this is a survey course, we will cover a range of topics and specific research methods. The objectives of the course are:

  • Introduce you to general concepts of methodological rigor and the core foundations of measurement.
  • Enhance your understanding of the suite of methods commonly used in organizational research.
  • Improve your skill in critically consuming research from a variety of methodological approaches.

Use R to Transcribe Zoom Audio files for use with zoomGroupStats

The zoomGroupStats functions that I’ve been building over the past few months have, to date, relied heavily on the transcription that is created automatically when a meeting is recorded to the Zoom Cloud. This is an excellent option if your account has access to Cloud Recording; however, it can be an obstacle if you want meeting leaders to record their own meetings (locally) and send you the file. In a recent project, for example, many meeting leaders accidentally recorded their meetings locally, which left me without transcripts of those meetings.

This week I’ve started building a set of functions to take in an audio file from a Zoom meeting (they could also take the video file, but that is unnecessary) and output the same transcript object that the processZoomTranscript function in zoomGroupStats produces. These functions rely on AWS Transcribe and S3. There are currently just two functions: one launches a transcription job (these jobs run asynchronously) and the other parses the output of a finished transcription job.

Note that these functions currently use the default segmenting algorithm in AWS Transcribe. From reviewing several transcriptions, I don’t think it is very good. If your work requires utterance-level analysis (e.g., average utterance length), I would consider defining your own segmentation approach. The functions will output a simple text file transcript, so you could use that as the basis for a custom segmentation.
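Before the code for the two functions, here is a minimal sketch of how they might fit together in practice. The audio file name, bucket name, speaker names, and the 30-second polling interval are placeholders, not values from an actual project:


# Minimal workflow sketch (file name, bucket, speakers, and polling interval are placeholders)
library(paws)

# 1. Launch the asynchronous transcription job (this uploads the local audio file to S3)
transcribeZoomAudio(fileLocation="local", bucketName="my-transcription-bucket",
    filePath="meeting_audio.m4a", jobName="meeting_audio.m4a",
    numSpeakers=3, languageCode="en-US")

# 2. Poll AWS Transcribe until the job finishes
svc = transcribeservice()
repeat {
    job = svc$get_transcription_job(TranscriptionJobName="meeting_audio.m4a")
    status = job$TranscriptionJob$TranscriptionJobStatus
    if(status %in% c("COMPLETED", "FAILED")) break
    Sys.sleep(30)
}

# 3. Parse the finished transcript into the same structure that processZoomTranscript produces
if(status == "COMPLETED") {
    audio.out = processZoomAudio(bucketName="my-transcription-bucket", jobName="meeting_audio.m4a",
        localDir="transcripts", speakerNames=c("Tom Smith", "Jamal Jones", "Jamika Jensen"),
        recordingStartDateTime="2020-06-20 17:00:00", writeTranscript=TRUE)
}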

############################################################
# transcribeZoomAudio Function
############################################################

# Zoom Audio File Processing, Function to launch transcription jobs
# This function only starts an audio transcription job -- it does not output anything of use. However,
# it is useful for batch uploading audio files and starting transcription jobs for them.

# This can be done with a local file (uploads to a specified s3 bucket) or with a file that already
# exists in an s3 bucket

# example call:             transcribeZoomAudio(fileLocation="local", bucketName="my-transcription-bucket", filePath="mylocalfile.m4a", jobName="mylocalfile.m4a", numSpeakers=3, languageCode="en-US")

# INPUT ARGUMENTS:
# fileLocation:             either "local" or "s3" - if local, then this function will upload the file to the specified bucket
# bucketName:               name of an existing s3 bucket that you are using for storing audio files to transcribe and finished transcriptions
# filePath:                 the path to the local file or to the s3 file (depending on whether it is "local" or "s3")
# jobName:                  the name of the transcription job for aws -- I set this to the same as the filename (without path) for convenience
# numSpeakers:              this helps AWS identify the speakers in the clip - specify how many speakers you expect
# languageCode:             the code for the language (e.g., en-US)

# OUTPUT:
# None

transcribeZoomAudio = function(fileLocation, bucketName, filePath, jobName, numSpeakers, languageCode) {
    require(paws)

    # First, if the file location is local, then upload it into the
    # designated s3 bucket
    if(fileLocation == "local") {
        localFilePath = filePath
        svc = s3()
        upload_file = file(localFilePath, "rb")
        upload_file_in = readBin(upload_file, "raw", n = file.size(localFilePath))
        svc$put_object(Body = upload_file_in, Bucket = bucketName, Key = jobName)
        filePath = paste("s3://", bucketName, "/",jobName, sep="")
        close(upload_file)
    }

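    # Launch the asynchronous transcription job, asking AWS to label up to numSpeakers speakers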
    svc = transcribeservice()  
    svc$start_transcription_job(TranscriptionJobName = jobName, LanguageCode = languageCode, Media = list(MediaFileUri = filePath), OutputBucketName = bucketName, Settings = list(ShowSpeakerLabels=TRUE, MaxSpeakerLabels=numSpeakers))
}


############################################################
# processZoomAudio Function
############################################################

# Zoom Audio File Processing, process finished transcriptions
# This function parses the JSON transcription completed by AWS transcribe.
# The output is the same as the processZoomTranscript function.

# example call:             audio.out = processZoomAudio(bucketName = "my-transcription-bucket", jobName = "mylocalfile.m4a", localDir = "path-to-local-directory-for-output", speakerNames = c("Tom Smith", "Jamal Jones", "Jamika Jensen"), recordingStartDateTime = "2020-06-20 17:00:00", writeTranscript=TRUE)

# INPUT ARGUMENTS:
# bucketName:               name of the s3 bucket where the finished transcript is stored
# jobName:                  name of the transcription job (see above - I usually set this to the filename of the audio)
# localDir:                 a local directory where you can save the aws json file and also a plain text file of the transcribed text
# speakerNames:             a vector with the Zoom user names of the speakers, in the order in which they appear in the audio clip.
# recordingStartDateTime:   the date/time that the meeting recording started
# writeTranscript:          a boolean to indicate whether you want to output a plain text file of the transcript
# languageCode:             the language code recorded in the utterance_language output (e.g., en-US); defaults to en-US

# OUTPUT:
# utterance_id:             an incremented numeric identifier for a marked speech utterance
# utterance_start_seconds:  the number of seconds from the start of the recording (when it starts)
# utterance_start_time:     the timestamp for the start of the utterance
# utterance_end_seconds:    the number of seconds from the start of the recording (when it ends)
# utterance_end_time:       the timestamp for the end of the utterance
# utterance_time_window:    the number of seconds that the utterance took
# user_name:                the name attached to the utterance
# utterance_message:        the text of the utterance
# utterance_language:       the language code for the transcript



processZoomAudio = function(bucketName, jobName, localDir, speakerNames=c(), recordingStartDateTime, writeTranscript, languageCode="en-US") {
    require(paws)
    require(jsonlite)

    transcriptName = paste(jobName, "json", sep=".")
    svc = s3()
    transcript = svc$get_object(Bucket = bucketName, Key = transcriptName)
    # Write the binary component of the downloaded object to the local path
    writeBin(transcript$Body, con = paste(localDir, transcriptName, sep="/"))
    tr.json = fromJSON(paste(localDir, transcriptName, sep="/"))

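    # Optionally write the full transcript text to a local plain text file (e.g., to use for custom segmentation)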
    if(writeTranscript) {
        outTranscript = paste(localDir, "/", jobName, ".txt", sep="")
        write(tr.json$results$transcripts$transcript, outTranscript)
    }

    # This IDs the words as AWS broke out the different segments of speech
    for(i in 1:length(tr.json$results$speaker$segments$items)){

        res.line = tr.json$results$speaker$segments$items[[i]]
        res.line$segment_id = i
        if(i == 1) {
            res.out = res.line
        } else {
            res.out = rbind(res.out, res.line)
        }

    }

    segments = res.out 
    segment_cuts = tr.json$results$speaker$segments[,c("start_time", "speaker_label", "end_time")] 

    # Pull this apart to just get the word/punctuation with the most confidence
    # Not currently dealing with any of the alternatives that AWS could give
    for(i in 1:length(tr.json$results$items$alternatives)) {

        res.line = tr.json$results$items$alternatives[[i]]

        if(i == 1) {
            res.out = res.line
        } else {
            res.out = rbind(res.out, res.line)
        }

    }

    words = cbind(res.out, tr.json$results$items[,c("start_time", "end_time", "type")])
    words = words[words$type == "pronunciation", ]
    words_segments = merge(words, segments, by=c("start_time", "end_time"), all.x=T)

    words_segments$start_time = as.numeric(words_segments$start_time)
    words_segments$end_time = as.numeric(words_segments$end_time)

    words_segments = words_segments[order(words_segments$start_time), ]
    segment_ids = unique(words_segments$segment_id)


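    # Concatenate the words in each speaker segment into a single utterance message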
    segment_cuts$utterance_id = NA
    segment_cuts$utterance_message = NA
    for(i in 1:length(segment_ids)) {
        utterance_id = segment_ids[i]
        segment_cuts[i, "utterance_id"] = utterance_id     
        segment_cuts[i, "utterance_message"] = paste0(words_segments[words_segments$segment_id == utterance_id, "content"], collapse=" ")
    }  

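    # If speaker names were provided, map the AWS speaker labels (spk_0, spk_1, ...) to the Zoom user names in order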
    if(length(speakerNames) > 0) {
        user_names = data.frame(0:(length(speakerNames)-1), speakerNames, stringsAsFactors=F)
        names(user_names) = c("speaker_label", "user_name")
        user_names$speaker_label = paste("spk",user_names$speaker_label, sep="_")
        segment_cuts = merge(segment_cuts, user_names, by="speaker_label", all.x=T)
    }

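    # Rename the segment time boundaries, convert them to numeric seconds, and order the utterances chronologically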
    names(segment_cuts)[2:3] = c("utterance_start_seconds", "utterance_end_seconds")
    segment_cuts[, 2:3] = lapply(segment_cuts[, 2:3], function(x) as.numeric(x))
    segment_cuts = segment_cuts[order(segment_cuts$utterance_start_seconds), ]

    # Now turn these into actual datetime values
    recordingStartDateTime = as.POSIXct(recordingStartDateTime)
    segment_cuts$utterance_start_time = recordingStartDateTime + segment_cuts$utterance_start_seconds
    segment_cuts$utterance_end_time = recordingStartDateTime + segment_cuts$utterance_end_seconds

    # Create a time window (in seconds) for the utterances -- how long is each in seconds
    segment_cuts$utterance_time_window = as.numeric(difftime(segment_cuts$utterance_end_time, segment_cuts$utterance_start_time, units="secs"))

    # Prepare the output file
    res.out = segment_cuts[, c("utterance_id", "utterance_start_seconds", "utterance_start_time", "utterance_end_seconds", "utterance_end_time", "utterance_time_window", "user_name", "utterance_message")]

    # Mark as unidentified any user with a blank username
    res.out$user_name = ifelse(res.out$user_name == "" | is.na(res.out$user_name), "UNIDENTIFIED", res.out$user_name)      

    # Add the language code
    res.out$utterance_language = languageCode

    return(res.out)    

}

Meeting Measures: Feedback from Zoom

I created a website to give feedback to people on their virtual meetings. This website (http://www.meetingmeasures.com) relies on the code I’ve shared in past posts on how to quantify virtual meetings. The purpose of the site is to (a) unobtrusively capture people’s behavior in virtual meetings, (b) give people feedback on their presence and contributions in virtual meetings, and (c) suggest ways to improve their leadership and/or engagement in virtual meetings. There are currently options to incorporate survey data into the dashboard, as well.

This was a fun project to build. So far, I’ve administered more than 100 meetings through the website. If you are interested in partnering on research about virtual meeting behavior, please reach out.