Connecting to MSFT SQL Server Using R and odbc on macOS

Adding this here to save the steps for when I need to set up additional Mac machines or update an existing one. The purpose of this setup is to be able to connect to MSFT SQL Servers directly from R on a Mac.

  1. Install Homebrew if it is not already installed.
  2. Install the version of the driver that you want to use. I use ODBC Driver 17 for SQL Server, which Microsoft distributes through its microsoft/mssql-release Homebrew tap as msodbcsql17.
  3. Tell R where to find the driver by adding the path to the .Renviron file. To find the location of this file, run R.home(component = "home") in R. Then, add the line ODBCSYSINI=/opt/homebrew/etc to the .Renviron file. (Note that you should “show hidden files” if you’re using Mac’s Finder.)
  4. Close R and start a new session. You should be good to go; a sketch of making a connection is below.
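
Once the driver is in place, you connect from R with the DBI and odbc packages. Here is a minimal sketch; the server, database, and credential values are placeholders that you would replace with your own:

# Minimal connection sketch using DBI + odbc
# Server, Database, UID, and PWD are placeholders -- substitute your own values
library(DBI)
library(odbc)

con <- dbConnect(odbc::odbc(),
  Driver = "ODBC Driver 17 for SQL Server",
  Server = "your-server.example.com",
  Database = "your_database",
  UID = "your_username",
  PWD = "your_password",
  Port = 1433)

# Confirm that the connection works, then clean up
dbGetQuery(con, "SELECT @@VERSION")
dbDisconnect(con)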

SRM_R: A Web-Based Shiny App for Social Relations Analyses

Wong, M. N., Kenny, D. A., & Knight, A. P. (In Press). SRM_R: A Web-Based Shiny App for Social Relations Analyses. Organizational Research Methods.

Abstract. Many topics in organizational research involve examining the interpersonal perceptions and behaviors of group members. The resulting data can be analyzed using the Social Relations Model (SRM). This model enables researchers to address several important questions regarding relational phenomena. In the model, variance can be partitioned into group, actor, partner, and relationship; reciprocity can be assessed in terms of individuals and dyads; and predictors at each of these levels can be analyzed. However, analyzing data using the currently available SRM software can be challenging and can deter organizational researchers from using the model. In this article, we provide a “go-to” introduction to SRM analyses and propose SRM_R (https://davidakenny.shinyapps.io/SRM_R/), an accessible and user-friendly, web-based application for SRM analyses. The basic steps of conducting SRM analyses in the app are illustrated with a sample dataset of 47 teams, 228 members, and 884 dyadic observations, using the participants’ ratings of the advice-seeking behavior of their fellow employees.

zoomGroupStats released on CRAN

zoomGroupStats is now available as a package on CRAN.

Title: zoomGroupStats: Analyze Text, Audio, and Video from ‘Zoom’ Meetings
Description: Provides utilities for processing and analyzing the files that are exported from a recorded ‘Zoom’ Meeting. This includes analyzing data captured through video cameras and microphones, the text-based chat, and meta-data. You can analyze aspects of the conversation among meeting participants and their emotional expressions throughout the meeting.


# To use the stable release version of zoomGroupStats, use:
install.packages("zoomGroupStats")

# To use the development version, which will be regularly updated with new functionality, use:
library(devtools)
install_github("https://github.com/andrewpknight/zoomGroupStats")

You can stay up to date on the latest functionality at http://zoomgroupstats.org.
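
If you want a quick feel for the basic workflow, here is a minimal sketch. It assumes the batch-processing entry point, batchProcessZoomOutput(), and its batchInput argument as described in the package documentation; the spreadsheet path is a placeholder:

# Sketch of the basic workflow -- the batch spreadsheet path is a placeholder
# and the returned element names follow the package documentation
library(zoomGroupStats)
batchOut = batchProcessZoomOutput(batchInput = "./myMeetingsBatch.xlsx")

# The returned list includes the parsed pieces of each meeting, e.g.:
head(batchOut$chat)
head(batchOut$transcript)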

Package version of zoomGroupStats

Thank you all for the encouragement and feedback on the initial version of zoomGroupStats. I can’t believe it’s been a little over a year since I posted the first set of functions in the early days of COVID-19. Following the suggestions of several users, I took some time this past week to build this out as a more structured R package.

Accompanying the package, you will find a multi-part guide for conducting research using Zoom and using zoomGroupStats to analyze Zoom meetings using R.

Because I am actively building out new functionality, the best way to use this resource right now is to install it through my GitHub repository. To do so:


library(devtools)
install_github("https://github.com/andrewpknight/zoomGroupStats", force=TRUE)
library(zoomGroupStats)

I’ll be updating the documentation and guidance videos and adding further functionality in the weeks ahead. The best resource for zoomGroupStats going forward will be the dedicated package site, which you can access at http://zoomgroupstats.org.

Please keep the feedback and suggestions coming!

Materials from zoomGroupStats Tutorial Session

I facilitated a short workshop introducing the zoomGroupStats set of functions. The tutorial covered three topics. First, I offered recommendations for how to configure Zoom for conducting research projects. Second, I described how to use the functions in zoomGroupStats to parse the output from Zoom and run rudimentary conversation analysis. Third, I illustrated how to use zoomGroupStats to analyze the video files output from Zoom.

If you weren’t able to make it, here are a few artifacts from the session:

Presentation materials, which provided the structure for the session (but do not include the demonstrations / illustrations)

zoomGroupStats Tutorial Code, which walked through using different functions in zoomGroupStats

Supplementary Tutorial Guide, which accompanied the session to provide additional recommendations

A faster way to grab frames from video using ffmpeg

In the zoomGroupStats functions that I described in this post, there is a function for identifying and analyzing faces: videoFaceAnalysis. The original version that I posted uses ImageMagick to pull image frames out of video files. In practice, this is a very inefficient way to break down a video file before sending it off to AWS Rekognition for face analysis; I’ve found that ImageMagick takes quite a long time to pull images from a video.
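
For context, a call to that function looks roughly like the following. Treat this as a sketch: the argument names follow the original post’s documentation, and the video path, start time, and Rekognition collection ID are placeholders.

# Rough illustration of videoFaceAnalysis -- all values are placeholders and
# argument names follow the original post; check your version before running
vidOut = videoFaceAnalysis(inputVideo = "meeting001.mp4",
  recordingStartDateTime = "2020-04-20 13:30:00",
  sampleWindow = 20,
  facesCollectionID = "my-team-faces")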

As an alternative, I’ve been using ffmpeg to process the video files before using the zoomGroupStats functions. I love ffmpeg and have used it for years to manipulate and process audio and video files. After you have installed ffmpeg on your machine, you can call system() inline in your R code to execute ffmpeg commands. For example, here’s what I include in a loop that works through a batch of video files:

ffCmd = paste("ffmpeg -i ", inputPath, " -r 1/", sampleWindow, " -f image2 ", outputPath, "%0d.png", sep="")

Then, you can just run system(ffCmd) to execute this line. In this command, inputPath is the path to the video file, sampleWindow is the number of seconds that you would like between each frame grab, and outputPath is the path to the output directory, including an image name prefix, where you want the images saved.
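
Putting it together, a minimal version of that loop might look like this; the directory path and the 20-second sampling window are placeholder values:

# Placeholder paths and sampling window -- substitute your own
videoFiles = list.files("./videos", pattern="\\.mp4$", full.names=TRUE)
sampleWindow = 20

for(inputPath in videoFiles) {
  # Name the output images after the source video file
  outputPath = paste(tools::file_path_sans_ext(inputPath), "_", sep="")
  ffCmd = paste("ffmpeg -i ", inputPath, " -r 1/", sampleWindow, " -f image2 ", outputPath, "%0d.png", sep="")
  system(ffCmd)
}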

Using a computer that isn’t very powerful (a Mac Mini), I was able to break down 20 roughly two-hour videos (about 2400 minutes of video) into frame grabs every 20 seconds (around 7000 images) in less than an hour.

I will end up replacing ImageMagick with ffmpeg in the next iteration of the videoFaceAnalysis function. That update will also add new output metrics (e.g., face height/width ratio and size oscillation). Stay tuned!