Taking advantage of the R programming language workshop at MI/AIR 2017 and subsequent training, we have used R to simplify, automate, and make more reliable several processes presented previously at MI/AIR. One was a method to calculate the incremental cost or benefit of running an undersized section, to aid decision-making at the class cancellation meeting. Another was to retrieve grade book data from our Brightspace LMS and use those data for student outcome assessment. We will show how R can perform these calculations more reliably and efficiently than relying entirely on Excel, while also providing greater flexibility to change input parameters. Also planned is a procedure using R to automate and streamline the calculation and submission of Perkins Core Indicators.
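The incremental cost-or-benefit idea behind the class cancellation analysis can be sketched in a few lines. The presenters' work was done in R; the Python below is purely illustrative, and every figure in it (tuition rate, instructor cost, enrollment) is a hypothetical placeholder, not their actual parameters.

```python
def section_margin(enrollment, credit_hours, tuition_per_credit, instructor_cost):
    """Incremental net revenue (or loss) of running one section.

    All inputs are hypothetical placeholders; a real model would also
    account for fees, overhead, and students who would simply enroll in
    another section if this one were cancelled.
    """
    revenue = enrollment * credit_hours * tuition_per_credit
    return revenue - instructor_cost

# Even an undersized section can run at a surplus:
margin = section_margin(enrollment=8, credit_hours=3,
                        tuition_per_credit=150, instructor_cost=2800)
# 8 * 3 * 150 = 3600 in tuition, less 2800 instructor pay -> margin of 800
```

Keeping the inputs as named parameters is what makes it trivial to re-run the calculation with different assumptions at the cancellation meeting itself.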
Within higher education research, we often learn about different ways to conduct analyses of particular student populations. A question begs to be answered, however: how applicable or valid are these studies when the same methodology is applied at our own institution? Simons (2013) states that reproducibility is the cornerstone of science: if an effect is reliable, any competent researcher should be able to obtain it when using the same procedures. That is the premise for this presentation, as Saginaw Valley State University sought to replicate a study done by Oakland University that used logistic regression to estimate the impact of unmet financial need on student retention rates. The study will be revisited, comparing results, methods, and outcomes across the two studies. The presentation will end with a brief discussion of the reproducibility crisis in psychology and medicine, how this crisis affects research within IR, and practical tips for making your own research more reproducible. References: Simons, D. J. (2013). The value of direct replication. Perspectives on Psychological Science, 9(1), 76-80.
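As a sketch of the method being replicated (not the actual SVSU or OU models), logistic regression relates the log-odds of retention to unmet need. The toy data, scaling, and learning-rate settings below are invented for illustration; real studies would use many covariates and a statistics package rather than hand-rolled gradient descent.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit a one-predictor logistic model (retention ~ unmet need)
    by gradient descent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y            # gradient w.r.t. intercept
            g1 += (p - y) * x      # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Invented data: unmet need (in $1,000s) and whether the student was retained.
need     = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
retained = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
b0, b1 = fit_logistic(need, retained)
# A negative b1 means higher unmet need lowers the estimated odds of retention.
```

A direct replication, in Simons's sense, would hold this model specification fixed and ask whether a new institution's data produce a comparable coefficient.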
We all dutifully submit data through IPEDS and other NCES studies, but don’t always have the time to follow through to see where the data become available to the public. Where will Outcome Measures show up? How is College Navigator updated? What data does the College Scorecard use? What are the NCES tools that will help me provide insightful data to my institution? This presentation will provide an overview of the connections between data submissions and the resulting data tools.
Ninety-eight percent of fall entering undergraduate students live on campus their first year at Michigan State University (MSU). By the second year, only fifty percent of these students live on campus. In this presentation, we explore differences in the characteristics and outcomes of students who live on campus their second year. We specifically evaluate differences in cumulative GPA, persistence rates, and graduation by second year residential choices. We control for selection bias through a variety of econometric techniques, specifically propensity score matching, coarsened exact matching, and multivariate regression on the matched sample dataset. Our results indicate that there are minimal impacts on student outcomes as a function of second year living, but outcomes vary by subgroups (race/ethnicity, Pell grant, and first generation). The different outcomes by subgroup indicate that changes to residential housing policy may affect both who lives in on-campus housing and where students choose to live. We conclude the analysis by exploring additional data that can help us understand the relationship between residential housing and student outcomes, e.g., Sankey charts of student flow from one residence to another and the correlations of persistence with dining use and residential room changes.
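A minimal sketch of one of the techniques named above, nearest-neighbor propensity score matching with replacement: each on-campus ("treated") student is paired with the off-campus ("control") student whose propensity score is closest, and outcomes are compared across pairs. The scores and GPA values below are invented; in practice (as in this study) the scores are first estimated from covariates and covariate balance is checked after matching.

```python
def match_att(treated, control):
    """Average treatment effect on the treated (ATT) via nearest-neighbor
    propensity score matching with replacement.

    Each unit is a (propensity_score, outcome) pair; a control unit may
    serve as the match for more than one treated unit.
    """
    diffs = []
    for score_t, outcome_t in treated:
        # Nearest control unit by propensity score distance.
        _, outcome_c = min(control, key=lambda c: abs(c[0] - score_t))
        diffs.append(outcome_t - outcome_c)
    return sum(diffs) / len(diffs)

# Invented (propensity score, cumulative GPA) pairs.
on_campus  = [(0.80, 3.2), (0.60, 2.9)]              # lived on campus year two
off_campus = [(0.79, 3.0), (0.61, 2.8), (0.20, 3.5)]
att = match_att(on_campus, off_campus)               # (0.2 + 0.1) / 2 = 0.15
```

The point of matching is visible even in this toy: the low-score control (0.20, 3.5) never enters the comparison, because no on-campus student resembles it.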
IR offices are sometimes asked how students from a certain district perform; Wayne State University, specifically, has recently focused on the performance of students from Detroit Public Schools. Matching students to their school districts of origin can, however, be a Herculean task due to inconsistent data from admissions, school district instability, and lack of regional knowledge. The original, manual process that Wayne State University used to identify and match high schools in Detroit will be detailed. To expand our ability to tie students to their districts of origin, a new methodology had to be developed. Using data sources from the State of Michigan, a SAS program was written that uses a combination of fuzzy matching methodologies (namely Soundex and Levenshtein distance) to reliably match official district and school type (public, charter, non-public) to admissions data.
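The two fuzzy-matching algorithms named above are standard: Soundex collapses a name to a phonetic code so that spelling variants collide, and Levenshtein distance counts the single-character edits separating two strings. The presenters' implementation was in SAS (whose built-in SOUNDEX and COMPLEV functions do this work); a compact Python rendering for illustration:

```python
def soundex(name):
    """American Soundex: first letter plus up to three digits coding
    consonant groups. Vowels reset the run; H and W do not."""
    codes = {c: d for d, group in enumerate(
        ["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1) for c in group}
    name = name.upper()
    out, prev = name[0], codes.get(name[0])
    for ch in name[1:]:
        if ch in "HW":
            continue               # H/W do not separate duplicate codes
        code = codes.get(ch)
        if code is None:
            prev = None            # vowels reset the previous code
            continue
        if code != prev:
            out += str(code)
        prev = code
    return (out + "000")[:4]       # pad or truncate to four characters

def levenshtein(a, b):
    """Classic dynamic-programming edit distance
    (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# "Robert" and "Rupert" collide phonetically; a transposed spelling of a
# school name sits a small edit distance away from the official record.
assert soundex("Robert") == soundex("Rupert") == "R163"
assert levenshtein("kitten", "sitting") == 3
```

Combining the two catches complementary errors: Soundex tolerates phonetic respellings that Levenshtein scores as several edits, while a small Levenshtein threshold catches typos that change the phonetic code.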