Wednesday, March 27, 2019

Looking at Discount, 2016

If you want to strike fear into the hearts of enrollment managers everywhere, just say, "The trustees want to talk about the discount rate."

If you don't know, the discount rate is a simple calculation: Institutional financial aid as a percentage of tuition (or tuition and fees) revenue.  If your university billed $100 million in tuition and fees, and awarded $45 million in aid, your discount is 45%.  In that instance, you'd have $55 million in hard cash to run the organization.
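If it helps to see the arithmetic, here's a quick sketch in Python (the figures are the hypothetical ones above, not any real college's):

```python
# Hypothetical figures from the example above -- not real college data.
gross_tuition = 100_000_000      # tuition and fees billed
institutional_aid = 45_000_000   # institutional aid awarded

discount_rate = institutional_aid / gross_tuition   # 0.45, i.e. 45%
net_revenue = gross_tuition - institutional_aid     # the hard cash left over

print(f"Discount rate: {discount_rate:.0%}")   # Discount rate: 45%
print(f"Net revenue: ${net_revenue:,}")        # Net revenue: $55,000,000
```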

Discount used to be a reporting function, something you would look at when the year was over to see where you stood.  Now, it's become a management target. And that's a problem.  If you want to know why, read this quick little explanation of Campbell's Law. The short explanation is this: If you want to lower discount--if that's really the thing you are after--you can do it very easily.  Just shrink your enrollment.  Or lower your quality, as measured by things like GPA and test scores. Easy.


Of course, this is generally not what people mean when they say they want to decrease the discount rate.  They usually mean "decrease the discount and keep everything else the same, or better yet, improve those measures."  That's not so easy.  The simple reason is that decreasing your discount means you're raising price.  And we all know what happens when you raise price, unless you turn your college into a Giffen good, which you can't do, of course.

What people really want is more net revenue: that $55 million in the example above.  You'd probably like to have it be $57 million, which would mean you lower your discount rate to 43%.  That happens because you either charge students more, or enroll more students who bring external aid, like Pell or state grants.  You don't care, really.  Cash is cash.

The absurdity of discount was demonstrated to me by a finance professor friend, who said back in the late 90's, "If we generate $12,000 in average net revenue on an $18,000 tuition (a 33% discount), let's propose raising tuition to $100,000 and the discount to 80%."  Yes, believe it or not, the denominator is important when calculating percentages, which is why it's hard to compare discounts in a meaningful way for competitors who charge more.
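The professor's point is easy to check with a couple of lines of arithmetic; this sketch uses his hypothetical numbers:

```python
# The finance professor's thought experiment, in code.  Net revenue is the
# number that matters; the discount rate depends entirely on the
# sticker-price denominator.
def discount_rate(tuition, net_revenue):
    return (tuition - net_revenue) / tuition

current = discount_rate(18_000, 12_000)    # ~33% discount on $18,000 tuition
proposed = discount_rate(100_000, 20_000)  # 80% discount -- but $8,000 MORE cash

print(f"{current:.0%} vs. {proposed:.0%}")  # 33% vs. 80%
```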

If you're interested, here's a little presentation I did on why colleges have tended to increase discount and net revenue at the same time.  This exercise is probably close to the breaking point, however.

Now that you understand a little more about discount, on to the data. This is from the IPEDS data for Fall, 2016, the most recent available showing both aid and admissions data.  There are four views, using the tabs across the top.

View 1: Discount overview

No interactivity: Just average discount rates by Carnegie type, Region, and Urbanicity.  I think the bottom one is the most fascinating discovery I've come across yet, just by playing with the data.

View 2: Discount by Market Category

This one combines the three categories above: Carnegie, Region, and Urbanicity into a single category to see how discounts play out.  In order to be included in this, there had to be at least ten colleges in the category.  You can see that the highest discount, on average, is Baccalaureate institutions in distant towns in the South Central region of the US.  You can color this by any of the three individual categories using the little control at the top right.

View 3: Individual Colleges

This lists all the private colleges for which I could calculate a freshman discount rate and net revenue per freshman.  The controls at the top allow you to look at schools like yours, if you want.  Note the slider at top right: I started showing freshman classes of at least 200, as some small college data gets a bit funky.  You can expand or narrow that by pulling the sliders to your heart's content.

Sort by any column by hovering over the little icon in the x-axis label.  If you get in trouble, you can always reset using the arrow control at lower right.

View 4: Multidimensional

Each college in this view is a bubble, arrayed on the chart in two dimensions: Freshman Discount and Average net revenue per freshman.  The size of the bubble shows freshman selectivity (bigger is more selective).  The color of the bubble shows the percentage of freshmen with institutional aid.  Note that the highest net revenue institutions are also the most selective, suggesting people will pay for prestige (or prestige and wealth pave the way to admissions). And the lowest net revenue institutions are dark blue, showing almost everyone getting institutional aid (either "merit" or "need-based" although those distinctions are silly.)

Use the filters to limit the colleges on the view, and use the highlight function (just start typing) to highlight the institution of your choice.  Note especially what happens when you limit the view to colleges with higher tuition.  Go ahead.  You won't break anything.

As always, let me know what you see.

Thursday, March 21, 2019

Varsity Blues and The Real Admissions Data

If you are at all interested in college admissions, you are perhaps already sick of the coverage of the Varsity Blues Scandal, in which some parents allegedly conspired to break the law to get unfair advantages for their children in the admissions process.

Almost no one thinks this behavior is appropriate.  Almost no one.

There have been calls for reform in college admissions, and it's clear that this scandal has exposed some weak spots in the process.

At some institutions.  OK, a handful of institutions.  If you had your thumb removed.

And even then, it's a coach or two at those institutions who gave into greed and decided to take the money in exchange for a greased path to admission and a fake spot on the team for these students, some of whom apparently had no idea about the machinations behind the scenes on their behalf.

Admittedly, as we get deeper into discovery on this, we may find that the SAT and ACT cheating scandal goes far deeper than we would have anticipated; this will create many more problems for probably many more universities.  But interestingly, the most zealous users of the SAT and ACT are the very institutions everyone is fascinated with, and this is what they get for putting faith in a test a) originally designed to keep Jews out of the Ivy League that is b) now produced by private companies accountable to no one, and that c) measures wealth and ethnicity better than academic potential. (The Institutional Research Office at Yale knew this as early as the mid-60's and fortunately for all of us, put it on paper and sent it to the archives.)

However, in this instance, I have to be uncharacteristically charitable to College Board and ACT, as their good faith efforts to make reasonable accommodations to students with diagnosed learning differences was--and still can be, apparently--exploited by a handful of parents with money and access to willing psychologists and psychiatrists, who should also be punished.

Those of us who work in higher education have long since given up trying to corral the media fascination with this handful of institutions and their quirky ways.  But the calls for reform suggest that admission to college in the US is extraordinarily competitive, so seeing the scope and context from a high level is still important, I think.

So, this:

Four views of admissions data from 2001--2017, all interactive and filterable to your heart's content.  Dive right in, or if you've never interacted with Tableau software before, take a few minutes to learn how to interact.

1st Tab (tabs are across the top): The three bars represent freshman applications, admissions, and enrollments at all four-year, public and private not-for-profit institutions in America, from 2001--2017, that are not open enrollment.  The number of institutions varies a tiny bit over time, but nothing that makes this analysis any different, fundamentally.

The orange line represents the aggregate admissions rate (percentage of applicants admitted).  If you want to look at a subset of these institutions, use one of the controls at the top: Look at just public universities, or just colleges in New England, for instance, or use the filters in any combination.

2nd Tab: Compare any four institutions to each other.  The view starts with four highly regarded Big 10 institutions, but you can use the drop down boxes to choose any four institutions you wish, from Abilene Christian to Youngstown State.

3rd Tab: This shows the universe broken into groups of colleges by selectivity.  Use the controls to see how many applications, admissions, or enrollments there were at each band of colleges (Most Selective is a 2017 admit rate of 10% or less, for instance; Extremely Selective includes all colleges with admit rates between 10% and 25% in that year. And so on.)

The top chart shows raw numbers; the middle chart shows percent of total; and the bottom shows how many colleges are in each category.  Hint: Dark blue is the most selective group of colleges--the ones everyone talks about.  And remember, this data doesn't even include Community Colleges (who do not report admissions data to IPEDS).

4th Tab: This shows four key metrics for all colleges, and can be broken out (that is the lines separated and colored) by several different categories using the control at top right, and can be filtered to show only certain types of colleges, using the controls in the middle of the right-hand column.

Draw rate (the bottom chart) is especially important, because it's a measure of market power, calculated as the yield rate divided by the admit rate.  If I might make a suggestion: Note the national average draw rate, and then look at it by the selectivity categories.  While almost every college strives to raise this rate, note who has: The ones everyone talks about.  Chicken, meet egg.  You two fight it out.
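If the formula isn't clear, here's a tiny sketch with made-up numbers showing why a highly selective college posts a much higher draw rate:

```python
# Draw rate = yield rate / admit rate, a rough measure of market power.
# Both institutions below are hypothetical.
def draw_rate(applications, admits, enrolls):
    admit_rate = admits / applications   # share of applicants admitted
    yield_rate = enrolls / admits        # share of admits who enroll
    return yield_rate / admit_rate

typical = draw_rate(10_000, 5_000, 1_250)  # admits 50%, yields 25% -> 0.5
elite = draw_rate(40_000, 4_000, 2_800)    # admits 10%, yields 70% -> 7.0

print(typical, elite)
```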

As always, you can't break anything.  The little reset arrow in the lower right is your friend, so use it if you get stuck.

And let me know what you think.



Sunday, March 10, 2019

Looking at "Discrepant Scores"

Several years ago, The College Board produced a study of "discrepant performance," after studying about 150,000 students and their freshman-year grades in college.  If you want to see the study, you can get a pdf of it here. The title, of course, is interesting. (And before we get too deep, it's important to note that these are old SAT scores, in case you think the new test is better and want to argue that point, nice tight concordance between the old and the new notwithstanding.)

Discrepant performance is defined as standardized test scores that are inconsistent with a student's academic performance in high school.  The distributions of scores and grades were normalized, and then each student's z-score on tests was compared to their z-score on grades.  Just under two-thirds of students had scores and grades that were consistent, and we don't need to talk too much more about them.
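As a rough sketch of how that classification works, here's the z-score comparison in code.  The one-standard-deviation threshold is my assumption for illustration; the study's actual cutoff may differ.

```python
# A simplified version of the discrepant-performance classification described
# above: normalize scores and grades, then compare each student's z-scores.
# The 1.0 threshold is illustrative, not necessarily the study's.
from statistics import mean, stdev

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def classify(sat_z, gpa_z, threshold=1.0):
    if sat_z - gpa_z > threshold:
        return "High SAT"      # tests well above grades
    if gpa_z - sat_z > threshold:
        return "High GPA"      # grades well above tests
    return "Consistent"

print(classify(2.0, 0.0))  # High SAT
print(classify(0.0, 2.0))  # High GPA
```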

The most interesting thing is the other one third of students: Those whose grades were higher than their tests (High GPA) and those whose tests were higher than their grades (High SAT).  Each group was about one-sixth (17.5%) of the sample.

Back to The College Board presentation for a moment.  It suggests that high testers with low grades are a better risk than low testers with high grades, but it also says grades are a better predictor by themselves than tests.  That's not statistically impossible, of course, but it does seem curious.  And it does fly in the face of both my experience and conventional wisdom regarding the best way to make admissions decisions; I think most admissions officers believe high tests/low grades are the weaker bet of the two extremes.

But let's go with that for a minute.  And let's ask why it might be true. But first, let's argue with what little methodology is presented here in this study.  A lot of the conceptual problem in predicting human performance, of course, comes from our own arrogance: In this case, the belief that a limited number of pre-college factors represent the sum total of factors affecting freshman grades.  How limited, in this case?  Two.

If you really wanted to get a good model to predict freshman performance, you'd look at a lot of factors: Family income, parental attainment, ethnicity of the student vis-à-vis the student body and vis-à-vis the high school they came from, just to name a few.  All of those factors are important, and what we find is that students from lower-income families whose parents didn't go to college, and who feel out of place in the college where they've enrolled, tend to struggle.  I don't see any of these factors controlled for in this analysis (if I'm wrong, I'll be happy to correct it.)

You can see the table of how discrepant performance breaks out, but can you really see it?  Let me draw you a picture.  On this chart (which shows only the students with discrepant performance), the light blue bar on the left chart shows the number of students with high tests and lower GPA (High SAT); the orange bar on the left chart shows the number of students with low tests and high GPA (High GPA).  Hover over the bars to see how many there are.  On the right chart, the mauve-colored bar shows what percentage of each group had high SAT. (The ones The College Board says you should give the breaks to.)



Surprise: Guess who tends to have higher grades and lower scores?  Women (who get better grades at every level of education than men, by the way); poorer students; students from underrepresented ethnic groups; and students whose parents have less education.  This narrative plays smoothly into the prevailing wisdom of 1930, which suggested they just were not suited for higher education, and which some people still seem to believe.

Who has higher scores and lower grades? Men, white and Asian students, wealthier students, and children of well educated parents.  And The College Board statistics tell you these are the students you should give a break to in the admissions process because they did better on a three-hour test.  You see, in the simple approach, only SAT scores and GPA determine your college performance, and it's not at all affected by how much you have to work, or worry about money, or spend time figuring out how college operates, or whether you belong there.  So keep giving the white guys the break.

Two final points: If I took the labels off the bars and told you "This is the gender chart," or "This is the income chart," you could probably put the labels on in the correct order pretty easily.  Second (and this could be a whole other blog post altogether), even the lowest-performing students (a B- average) with the lowest test scores ended the first year with an average GPA of 2.0, and the differences between and among the groups are exaggerated by a truncated y-axis on the chart in the presentation.

As always, let me know what you think.

Monday, February 18, 2019

Pell and Non-Pell Graduation Rates

Much has been made recently of the attempts by colleges to increase the enrollment of Pell-eligible students.  For those who don't know, the Pell Grant is the federal grant awarded to students with the highest financial need.  In fact, the pressure may be backfiring, in a classic case of Campbell's law.

Regardless, given the state of federal reporting requirements (why can't the FISAP be in IPEDS??), this blunt tool is still the best one we have widely available to help take stock of the economic diversity of enrolling students.

So this is where we are.

This morning, Robert Kelchen sent this tweet about the data he uses to measure grad rate gaps between Pell and non-Pell recipients.  I asked him for it, and he graciously shared it right away.  I spent 30 minutes visualizing it (for our own internal use, mostly), and cleaned it up for others who might want to take a look.

On the first view, four data points are displayed: The college's grad rate for Pell (light blue) and Non-Pell (dark blue) on the left; the percentage in the measured freshman cohort in purple in the center; and the gap, in percentage points.    The identical chart at the bottom breaks it out by sector.

I recommend you use the filters at the top to limit the top chart by a) the size of the cohort (for instance, between 500 and 5,000) and b) sector.  For these two filters, the bottom chart will not change.  However, if you want to look at a specific state, using that filter will affect both the top and bottom.

If you want to sort the data by either the red or purple bars, hover over the top of the column, and click on the small icon that appears.  Sort descending, ascending, or alphabetical on consecutive clicks.

The second chart is mostly a nothing burger: I was curious to see if the percentage of Pell students in the cohort had an effect on the gap.  As you can see, it doesn't.  On this chart, type and select any institution to see it highlighted.

And, as always, let me know what you see.



Friday, February 1, 2019

Doctoral Recipients, 2013--2017

This data has long been of interest to high school counselors, and of course, I decided to update it at the worst possible time: During the recent shutdown of the federal government.  I found the NSF website shuttered.

Fortunately, the Polar Vortex gave almost everyone in Chicago a two-day break shortly after the government re-opened, and with the windchill approaching -60°F, there was not much else to do.  The data were available again, so here you go.

There are two simple views of doctoral education here: The first is the undergraduate institution of doctoral recipients from 2013 to 2017.  You can use the controls at the top to limit your view to public or private; Carnegie type; State, or HBCU status.  If you want to, you can also focus on a single year or range of years using the sliders.

For instance, if you wanted to look at how many graduates of Baccalaureate institutions in California received a doctorate in chemistry in 2014, just play around until you get there. (The top college may surprise you!)

The second view is similar, but shows the universities awarding the doctorate.  The filters work the same way.

Let me know what you find interesting here.

Monday, January 14, 2019

Yes, Enrollment is Going Down. Also up.

When designing a data visualization, the first thing to ask is, "What does the viewer want to see, or need to know?"  If you're designing a dashboard for a CFO or a CEO or a VP for Marketing, those things are pretty straightforward: You're designing for one person and you have a pretty good idea what that person wants.

But in higher education, we want to look at segments of the industry, and trends that are specific to our sector.  And there are thousands of you (if this blog post is average, that is).  So I can't know.

This visualization of enrollment data measures only one thing: Enrollment.  But it measures several different types of enrollment (full-time, part-time, graduate, and undergraduate, in combination) at many different types of institutions (doctoral, baccalaureate, public, private, etc.)  And the best thing is that you can make it yours with a few clicks.

The top chart shows total headcount, and the bottom shows percentage change since the first year selected.  If you want to change the years, or change the types of enrollment, or the universe of the colleges selected, use the gray boxes at the right.  At any time, use the lavender box at top right to change the breakouts of the charts: To color by region, or grad/undergrad, or any other variable listed.

There are lots of interesting trends here, some of which will help you realize that while enrollment may be declining, it's not declining everywhere, or for every type of institution.

See something interesting? Post in the comments below.


Monday, December 10, 2018

Medical School Admissions Data

This is pretty interesting, I think, mostly for the patterns you don't see.

This is data on medical school admission in the US; some of it is compiled for a single year, and some for two years (which is OK because this data appears to be pretty stable over time.)

Tab 1 is not interactive, but does show applications, admits, and admit rates on grids defined by GPA and MCAT scores.  Darker colors show higher numbers (that is, more counts, or higher admit rates.)  While we cannot get a sense of all takers like we do with other standardized tests, this does perhaps show some strong correlation between college GPA and MCAT scores (of course, another explanation may be that students self-select out, which then makes me wonder about that one student with less than a 2.0 GPA and less than a 486 total MCAT score who applied, was admitted, and then enrolled).

The second and third tabs show applicants by undergraduate major, and ethnicity, respectively.  Choose a value at upper right (Total MCAT, or Science GPA, or Total GPA, for instance), and then compare that value for all applicants and all enrolling students on the bars; gold is applicants, and purple is enrollers.  The label only shows the value for the longer bar; hover on the other for details.

I was frankly surprised by some of these results.  How about you?


Thursday, December 6, 2018

2017 Admissions Data: First Look

IPEDS just released Fall, 2017 Admissions data, and I downloaded it and took a quick look at it. If you've been here before, most of this should be self-explanatory.

Three tabs, here: The first is to take a look at a single institution.  Use the control at top to select the college or university you're looking for. (Hint, type a few letters of the name to make scrolling quicker).

The second tab allows you to compare ten institutions at once (you can do more, but it gets messy).  I started with the ten most people want to see, but you can remove them by scrolling to their entry in the drop-down, unchecking them, and clicking apply.  Add institutions by checking the box by their name.

The final tab shows the relationship between test scores and Pell, which I've done before, but I never get tired of.  Choose SAT or ACT calculated means for the x-axis, then limit by region and/or control if you so desire.

Notes:

1) Some of the admissions data for 2017 is tentative, so anomalies are probably in error.
2) Test-optional colleges are not allowed to report test data.
3) Financial aid data is for 2016, as the 2017 data is not yet available.  It tends not to change dramatically from one year to the next, however.


Friday, November 30, 2018

The Death of History?

The last several days have seen a couple of articles about the decline of history majors in America.  How big is the problem?  And is it isolated, or across the proverbial board?

This will let you see the macro trend, and drill down all the way to a single institution, if you'd like.

The four charts, clockwise from top left are: Raw numbers of bachelor's degrees awarded from 2011-2016 (AY); percentage of total (which only makes sense when you color the bars) to show the origins of those degrees; percentage change since the first year selected; and numeric change since the first year selected.

You can color the bars by anything in the top box at right (the blue one) or just leave totals; and you can filter the results to any region, or group of years, or major group (for instance, history, or physical sciences), or even any specific institution.  And of course you can combine filters to look at Business majors in the Southeast, if you wish.

That's it.  Pretty simple.  Let me know what looks interesting here.


Wednesday, November 28, 2018

Your daily dose of "No Kidding"

As a young admissions officer in 1985, I went to my first professional conference, AACRAO, in Cincinnati. I don't remember much about it, but one session is still clear to me. I had chosen a session almost by accident, probably, because it was admissions focused in a conference that was mostly registrars. And fate stepped in.

There was a last-minute substitution, and Fred Hargadon filled in for some person whose name is lost to history. At the time, I didn't think I'd stay in admissions long; my personality type is atypical for the profession, and I didn't find a lot to excite me.  But in this session I found someone who could approach the profession, well, professionally; someone who could view admissions in a much larger context than I was used to seeing.  Someone who was more intellectual and conceptual than friendly (although he was both).

I remember a lot of that session, but one thing has stuck with me through all this time.  He said, "In all my years in this profession, I've learned only two things: First, that the block on which  you were born determines where you'll end up in life more than any other factor; and second, if we had to choose the absolute worst time to put someone through the college admissions process, it would be age 17."

It was that first part that hit me.  It still does.  And here is some data that suggests things beyond your control still determine where you end up.  It's from the NCES Digest of Education Statistics, and shows what happened to students who were sophomores in high school in 2002 ten years later.

This is a pretty easy visualization to work with: The bottom bar chart shows the outcomes of the total group.  Then, using the filter at the top right, you can break out the top display by one of several values: Ethnicity (the default), gender, high school GPA, high school type, parental education, parental socioeconomic status, and the student's self-reported aspiration.  You can then see what percentage of each group has attained degrees, some education, or nothing beyond high school.  And of course, you can compare that breakout group to the total.

Use the "Highlight Outcome" function to make any particular level of education stand out.

Of course, the relationships between and among these variables are pretty clear, but the data are still telling: If you're white or Asian, if you're a female, if you were a good student in high school, if you went to a private high school, if your parents went to college, if your parents were wealthier, and if you aspired to a degree, guess what? You were more likely to get a degree.

And of course, while some of these things are a function of birth, others, like your high school GPA and your aspirations, may be heavily influenced by educated, wealthy parents.

Play around a little bit, and if you are able to find one thing on this that surprises you, let me know.


Thursday, November 1, 2018

2018 AP Scores by State and Ethnicity

The College Board data on AP scores is now available for 2018, but it's hard to make sense of at a macro level.  The data are in 51 different workbooks, and, depending on how you want to slice and dice the data, as many as eight worksheets per workbook.  What's more, the data structure is designed for printing on paper, for those who want to dive into one little piece of the big picture at a time.

So before going any farther, I'd like us all to challenge the College Board and ACT to put out their data in formats that make exploring data easier for everyone. Unless, of course, they really don't want to do that.

I downloaded all 51 workbooks and extracted the actual data using EasyMorph, then pulled it into Tableau for visualization and publication. There are four views here.

The first tab is a simple scattergram, which may be enough: The relationship between a state's median income and the average AP exam score.  While blunt, it points out once again that we as a nation reward achievement in admissions (rather than merit) and that achievement is easier when you have more resources.  Filter by ethnicity or specific exam, and use the highlighters to show a state or region.

Tab two is a map, with average scores color coded.  Again, you see higher scores (orange and brown) in places where parental attainment and income are higher.  Again, two filters for you to drill down.

Tab three shows differences for any group by grade level and gender. It might be surprising to find that 11th graders generally score higher than 12th graders, until you realize that accomplished, driven children of successful parents load up on AP courses early to help with college applications.  But, given that girls have higher grades in high school than boys, you might also be surprised by the higher scores boys usually post in AP.  By the way, the young women go on to earn higher grades in college too, so wonder about that for a while.

The fourth tab shows score distributions two ways: On the left, with scores of 4 and 5 to the right, assuming 4 is generally the cutoff for college credit; since some of the groups are small (like Italian, for instance), I also put a stacked 100% bar on the right.  The Exam Groups filter at upper right clusters the tests by type (Science, Languages, etc.).

We all know that it is a good thing for students to work hard and challenge themselves in high school, but we also know--ceteris paribus--schools with more resources help prepare students for these exams better. As you look through these visualizations, I recommend you look at groups most underserved in our country, and ask whether the promise of AP has been delivered yet.

This data set is complicated and would need some explanation to manipulate, but I'll make the restructured version available to anyone in higher ed who wants it, via email to jon.boeckenstedt@depaul.edu




Monday, October 1, 2018

Story Telling With Data Challenge

I've often seen the challenges issued by Cole Knaflic on the Story Telling With Data website, and found the most recent one, creation of a scatterplot, to be too tempting to pass up. I used Tableau to create it, and yes, I've written about this before.

This is IPEDS data, from Fall of 2015 (the most recent complete set available).  It shows the strong correlation between standardized test scores and income.  And I think it shows something else, too.

On the x-axis, choose SAT or ACT scores (depending on your comfort) to see how higher scores translate into fewer lower-income students (as measured by eligibility for Pell Grants).  The bubbles are color-coded by control, and sized by selectivity (that is, the percentage of freshman applications accepted.)  Highly selective institutions are coded as larger bubbles, and less selective as smaller bubbles.

Note the cluster of private, highly selective institutions at the lower right: Most of these institutions are among the nation's wealthiest, yet they enroll the lowest percentages of low-income students.  And, at the same time, they deny admission to the greatest numbers of students.  I presume they had many low-income students among those who were not offered admission.

Causality is complex, of course, and tests measure and vary with social capital, opportunity, and student investment as well as income and ethnicity. But this is one of those instances where a single picture tells the whole story, I think.  What about you?


Thursday, August 30, 2018

An Interactive Retention Visualization

As I've written before, I think graduation rates are mostly an input, rather than an output.  The quality of the freshman class (as measured by a single blunt variable, average test scores) predicts with pretty high certainty where your graduation rate will end up. 

(Note: Remember, the reason test optional admissions practices work is that test scores and GPA are strongly correlated.  If you didn't have a high school transcript, you could use test scores by themselves, but they would not be as good; sort of like using a screwdriver as a chisel.  And the reason why mean test scores work in this instance is essentially the same reason your stock portfolio should have 25 stocks in it to reduce non-systematic risk.)

Further, choosing students with high standardized test scores means you're likely to have taken very few risks in the admissions process, as high scores signal wealth, more accumulated educational opportunity, and college-educated parents. That essentially guarantees high grad rates.

But you can see the data for yourself, below. How to interact:

Each dot is a college, colored by control: Blue for private, orange for public. Use the filter at right to choose either one, or both.

The six-year graduation rate is on the y-axis, and mean test scores of the Fall, 2016 freshman class are along the x-axis.  Using the control at top right, you can choose SAT or ACT.  Test-optional colleges are not allowed to report scores to IPEDS.

If you want to find a college among the 1,100 or so shown, type part of the name in the "Highlight" box.  Then select from the options given.  You should be able to find it.

Sound good? There is more.

Try using the "Selectivity" filter to look at groups of colleges by selectivity.  Notice the shape of the regression lines, and how they're largely the same for each group.

Finally, if you click on an individual college, you'll find that two new charts pop up at bottom.  One shows the ethnic breakdown of the undergraduate student body; one shows all the graduation rates IPEDS collects. If you click often enough, you'll see patterns here, too. Race signals a lot, including wealth and parental attainment, as those--again--turn into graduation rates.

A final note: I've added a variable called "Chance of Four-year Graduation" which is explained here.  The premise is that everyone thinks they're going to graduate from the college they enter, so of those who do graduate, what percentage do it in four?
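As I read the premise, the calculation is simply the four-year rate divided by the six-year rate; the linked post has the full explanation, so treat this as a sketch with made-up rates:

```python
def chance_of_four_year_grad(four_year_rate, six_year_rate):
    """Among students who graduate at all (the six-year rate),
    what fraction finished in four years?"""
    if six_year_rate == 0:
        return 0.0
    return four_year_rate / six_year_rate

# e.g., a college graduating 60% in four years and 80% in six:
print(round(chance_of_four_year_grad(0.60, 0.80), 2))  # 0.75
```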

Tell me what you find interesting here.




Tuesday, July 17, 2018

All the 2015 Freshman Full-pays

There is no problem so great that it can't be solved by enrolling more full-pay students, it seems.  And in the minds of some, there is no solution so frequently tossed out there.  I've heard several presidents say, "We're doing this to attract more full-pay students."

Before we dive too deeply into this, a definition: A "Full-pay" student is not one who receives no aid; rather it's one who receives no institutional aid. Often these overlap considerably, but a student who receives a full Pell and/or state grant, and then takes out a PLUS loan is a full-pay; all the revenue to the college comes in cash, from another source, rather than its own financial aid funds.  The source of that cash matters not to the people who collect the tuition.  Got it?
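In code, the definition is almost trivial, which is the point: only *institutional* aid matters. The record layout and dollar figures below are invented for illustration.

```python
# "Full-pay" means zero institutional aid; Pell, state grants, and PLUS
# loans are someone else's money and arrive at the college as cash.
def is_full_pay(student):
    return student["institutional_aid"] == 0

students = [
    {"name": "A", "institutional_aid": 0,     "pell": 5920, "plus_loan": 12000},
    {"name": "B", "institutional_aid": 15000, "pell": 0,    "plus_loan": 0},
    {"name": "C", "institutional_aid": 0,     "pell": 0,    "plus_loan": 0},
]

full_pays = [s["name"] for s in students if is_full_pay(s)]
print(full_pays)  # ['A', 'C'] -- A gets Pell and a loan but is still full-pay
```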

This is a fairly deep dive into the IPEDS 2015 Fall Freshman data (there is 2016 admissions data, but financial aid data is only available for 2015-2016, so I used that admissions data to line things up.)  It's safe to say that things may have gotten slightly worse for most colleges since then, but there may be places where it's gotten better.  Discount at public institutions is less meaningful, so I've only included about 900 four-year, private, not-for-profit Doctoral, Masters, and Baccalaureate institutions with good data.

Eight views here: The first four are overviews, the next three are details within the larger context, and the final view is single institutions.  Colleges are banded into groups by selectivity in Fall, 2015, with more selective on the left, moving to the right.  Those groups are labeled "Under 15%," meaning the admit rate was under 15% in 2015; "15% to 30%," etc.  Open Admission at the right simply means the college generally admits all applicants, and is not required to report admissions data to IPEDS.

Ready? Use the tabs across the top to navigate.

1) Institutions and Full Pays: Looking at colleges by selectivity, what percentage of institutions fall into each group, and what percentage of full-pay students attend?  The orange line shows that 2.45% of colleges are in the most selective group, but 14.43% of full-pays (purple line) enroll there.  Sums accumulate to the right.

2) Enrollments and Full Pay: Similar data, except now the red line shows what percentage of freshman overall are enrolled in these institutions.  For instance, 5.27% of all freshmen, but 14.43% of all full-pay students, enroll in the under 15% group.  This also shows running percentages, so by the time you get to all colleges up to and including 45% to 60%, the numbers are 73% and 81%.
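The "sums accumulate to the right" mechanics are just running totals across the selectivity bands. A sketch, keeping the two figures quoted above and inventing the rest:

```python
from itertools import accumulate

# Shares by selectivity band, most selective first.  The 5.27 and 14.43
# come from the text above; the other values are invented to illustrate.
bands          = ["Under 15%", "15% to 30%", "30% to 45%", "45% to 60%"]
freshman_share = [5.27, 10.00, 20.00, 37.73]
full_pay_share = [14.43, 15.00, 25.00, 26.57]

running_freshmen = list(accumulate(freshman_share))
running_full_pay = list(accumulate(full_pay_share))

# By the 45%-to-60% band, the running totals reach about 73% and 81%.
print(round(running_freshmen[-1]), round(running_full_pay[-1]))
```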

3) Freshman and Full-Pay Percentages: These are discrete.  The teal colored bar, for instance, shows only students in that category (135,381 freshmen) and the percentage of students in that group who are full-pay (4.9%).

4) Full-pay Destinations: Where do full-pay students enroll?  This shows by region and selectivity, and you can filter to a single state if you'd like.  It just shows Fall, 2015 raw numbers.

5) 6) and 7) are similar charts, with the only difference being the value displayed.  In these three, dots represent a single institution, colored by region.  They're grouped by selectivity (left to right position), and then the vertical position shows the value.  Full-pays shows the percentage of full-pays in the 2015 freshman class. Discount shows discount rate (the sum of institutional financial aid divided by the sum of tuition and fees).  Average net revenue shows just that, which is the actual cash a college generates per student.  Use the highlight function to show a single college or highlight a region for comparison.
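For concreteness, the discount and net-revenue arithmetic described here, with made-up totals:

```python
# Discount rate: institutional aid as a share of tuition-and-fees revenue.
def discount_rate(total_institutional_aid, total_tuition_and_fees):
    return total_institutional_aid / total_tuition_and_fees

# Average net revenue: the actual cash generated per student.
def avg_net_revenue(total_tuition_and_fees, total_institutional_aid, headcount):
    return (total_tuition_and_fees - total_institutional_aid) / headcount

aid, tuition, students = 45_000_000, 100_000_000, 2500
print(f"{discount_rate(aid, tuition):.0%}")                # 45%
print(f"${avg_net_revenue(tuition, aid, students):,.0f}")  # $22,000
```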

And finally, 8) Single Institution allows you to see those three variables for one institution at once.  They are colored by region.  You can sort by any column just by hovering over the axis and clicking the pop-up icon.  Sort descending by value, ascending by value, or alpha by name as you cycle through the clicks.

If your data are wrong, talk to your IR office.  If all data are wrong, drop me an email as I may have made a calculation error.  Otherwise, drop me a note and let me know what you think.


Wednesday, May 30, 2018

Measuring Internationalism in American Colleges

How International is a college?  And how do you measure it?  There are certainly a lot of ways to think about it: Location in an international city like New York, Chicago, or Los Angeles, for instance.  The extent to which the curriculum takes into account different perspectives and cultures, for another.

And, of course, there is some data, this time from the IIE Open Doors Project.  I did a simple calculation, taking the number of international students enrolled, plus the number of enrolled students studying abroad, and dividing that sum by total enrollment to come up with an international index of sorts.

No, it's not precise, and yes, I know the two groups are not discrete, but this--like all the data on this blog--is designed to throw a little light on a question, not to answer it definitively.
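In code, the index is one line (the enrollment numbers below are invented, and, as noted, the overlap between the two groups makes this only a rough measure):

```python
# Chance a randomly chosen student is either international
# or studied abroad in the last year (groups may overlap).
def international_index(intl_students, study_abroad, total_enrollment):
    return (intl_students + study_abroad) / total_enrollment

print(f"{international_index(1200, 800, 10000):.1%}")  # 20.0%
```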

You'll find data on all the colleges that participate in the IIE survey, displayed in four columns:  Total enrollment (on the left), International enrollment, Overseas study numbers, and the International Engagement Index, which is sort of the chance a randomly selected student will be either international or studied internationally in the last year.

The colleges are sorted by the first column, total enrollment.  You may want to see who has the most international students, or the highest International Index.  It's easy to sort these columns by hovering over the small icon near the axis label, as pictured below and indicated by the yellow arrow.  There is one for each column; give it a try, and if you get stuck, use the reset button.


As always, feel free to leave a comment below.



Thursday, May 10, 2018

Looking at Transfers

It's official: Princeton has broken its streak of not considering transfer students for admission, and has admitted 13 applicants for the Fall, 2018 term of the 1,429 who applied, for an astonishing how-low-can-you-go admit rate of 0.9%.  Of course, we'll have to wait until sometime in the future to see how many--if any--of them actually enroll.

I thought it might be interesting to take a look at transfers, so I did just that, using an IPEDS file I had on my desktop.  There are four views here, and they're pretty straightforward:

The first tab shows the number of transfers enrolled by institution in Fall, 2016 (left hand column) and the transfer ratio.  The ratio simply indicates how many new transfer students you'd meet if you went to that college campus in Fall, 2016 and chose 100 students at random.  A higher number suggests a relatively more transfer-friendly institution.  You can choose any combination of region, control, and broad Carnegie type using the filters at the top.
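The ratio is just new transfers per 100 undergraduates; with invented numbers:

```python
# Transfer ratio: new transfer students per 100 enrolled undergraduates.
def transfer_ratio(new_transfers, total_undergrads):
    return 100 * new_transfers / total_undergrads

print(round(transfer_ratio(450, 6000), 1))  # 7.5 transfers per 100 students
```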

The second tab shows the same data arrayed on a scatter gram; type any part of a college name and then select it to see it highlighted on the chart.  Hover over a point for details.

The third chart is static, and shows undergraduate enrollment in Fall, 2016 and the number of new transfer students in the same term.  The bars are split by region and colored by Carnegie type.

And the last tab shows the weighted transfer ratios, split the same way.

As you'll see, thirteen students doesn't seem so significant against the 810,000 new transfers in Fall, 2016.  But it's a start.




Monday, May 7, 2018

Want to increase graduation rates? Enroll more students from wealthier families.

OK. Maybe the headline is misleading.  A bit.

I've written about this before: the interconnectedness of indicators of college success.  This is more of the same with fresher data, to see if anything has changed.  Spoiler alert: Not much.

What's new this time is the IPEDS publication of graduation rates for students who receive Pell and those who don't, along with overall graduation rates.  While the data are useful in aggregate to point out the trends, at the institutional level, they are not.

First, some points about the data:  I've included here colleges with at least 20 Pell-eligible freshmen in 2015, just to eliminate a lot of noise.  Colleges with small enrollments don't always have the IR staff to deliver the best data to IPEDS, and they make the reports a bit odd.  And even without these institutions, you see some issues.

Second, colleges that do not require tests for admission are not allowed to report tests to IPEDS.  Once you check "not required," the box with test scores gets grayed out, so attempting to report them is futile.

But, it's here.  View one shows pretty much every four-year public and private not-for-profit college in the US, and includes four points: On the left as dots are six-year grad rates for all students (light blue), Pell students (dark blue), and non-Pell students (purple).  On the right is the gap between Pell and non-Pell grad rates.  Again, some of these numbers are clearly wrong, or skewed by small numbers in spite of the exclusion noted above.

The next four collectively tell the story of wealth and access:


  • If you have more Pell students, your graduation rate is lower
  • While most colleges do a pretty good job of keeping Pell and non-Pell grad rates close, there are some troubling outliers
  • If you focus on increasing SAT scores in your freshman class, you'll pretty much assure yourself of enrolling fewer low-income students
  • But if you have higher mean freshman test scores, you'll see higher grad rates
In other words, test scores are income; income is fewer barriers to graduation.  And colleges are thus incentivized not to enroll more low-income students: It hurts important pseudo-measures of quality in the minds of the market: Mean test scores, and graduation rates.

If you're interested in a much deeper dive on this with slightly older data, click here.  Otherwise, feel free to play with the visualization below.


Thursday, March 29, 2018

How have admit rates changed over time?

Parents, this one's for you.

Things are different today, or so everyone says.  If you want to see how admit rates have changed over time at any four colleges, this is your chance.  Just follow the instructions and take a look to compare how things have changed over four years.  The view starts with four similar midwestern liberal arts colleges, but you can compare any four of your choice.  (And before you ask, 2016 is the most recent data available in IPEDS).

And, a note: These changes are not all driven solely by demand.  Colleges can manipulate overall admit rates by taking a larger percentage of their class via early programs, and admit rates in those programs can be as much as 30 points higher than in regular decision.


Tuesday, March 13, 2018

Early Decision and Early Action Advantage

There is a lot of talk about admission rates, especially at the most competitive colleges and universities, and even more talk, it seems, about how much of an advantage students get by applying early, via Early Decision (ED, which is binding) or Early Action (EA, which is restrictive, but non-binding).

I license the Peterson's data set, and they break out admissions data by total, ED, and EA, and I did some calculations to create the visuals below.

Two important caveats: Some colleges clearly have people inputting the data who do not understand our terminology, who don't run data correctly, or who make a lot of typos (a -500% admission rate is probably desirable, but not possible, for instance).  Second, not every university with an EA or ED option (or any combination of them, including the different ED flavors), breaks out their data.

Start with the overall admit rate.  That's the one that gets published, and the one people think about. It's the fatter, light gray bar.  Then, the purple bar is the regular admit rate, that is, the calculated estimate of the admit rate for non-early applications (this is all applications minus all early types).  The light teal bar is the early admit rate: ED plans on the top chart, and EA plans on the bottom.  Some colleges have both, of course, but most show up only once.
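The regular admit rate here is an estimate that backs the early numbers out of the totals; a sketch with invented figures:

```python
# Estimated regular admit rate: subtract early applications and early
# admits from the totals, then divide.
def regular_admit_rate(total_apps, total_admits, early_apps, early_admits):
    return (total_admits - early_admits) / (total_apps - early_apps)

total_apps, total_admits = 30000, 6000   # 20% overall admit rate
early_apps, early_admits = 4000, 2000    # 50% early admit rate
rate = regular_admit_rate(total_apps, total_admits, early_apps, early_admits)
print(f"{rate:.1%}")  # 15.4% -- well below the published 20%
```

Even in this toy example, the overall rate flatters the regular-decision applicant's actual chances.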

You can use the filter at right to include colleges by their self-described level of admissions difficulty.

Working on another view to show the number of admits scooped up early vs. regular.  Stay tuned.  Until then, what do you notice here?  Leave a comment below.


Thursday, March 1, 2018

Tuition at State Flagships

The College Board publishes good and interesting data about college tuition, including a great table of tuition at state flagship universities. (I realized while writing this that I don't know how a university is designated a state flagship.  Maybe someone knows.)

There is some interesting stuff here, but I'll leave it for you to decide what jumps out at you: If you live in North Dakota, you might wonder why South Dakota has such low tuition for non-residents.  If you live just outside Virginia or Michigan, you might wonder why it costs so much to cross the border.

Anyway, using the tabs across the top, there are five views here:

Maps

Four maps, showing (clockwise from upper left) in-state tuition, out-of-state tuition, non-resident premium index (that is, how much extra a non-resident pays, normalized to that state's in-state tuition), and the non-resident premium in dollars.  Hover over a state for details.  You can change the year, and see the values in 2017 inflation-adjusted dollars, or nominal (non-adjusted) dollars.
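As I read the premium index, it's the extra dollars a non-resident pays expressed as a fraction of in-state tuition (it could equally be a simple out-of-state-to-in-state ratio); the tuition figures below are invented:

```python
# Non-resident premium in dollars: the raw extra cost.
def premium_dollars(nonresident, resident):
    return nonresident - resident

# Premium index: the extra cost normalized to in-state tuition.
def premium_index(nonresident, resident):
    return (nonresident - resident) / resident

res, nonres = 12000, 36000
print(premium_dollars(nonres, res))          # 24000
print(round(premium_index(nonres, res), 2))  # 2.0 -- twice in-state tuition extra
```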

States in Context

This arrays the states by tuition over time.  Use the highlight functions (go ahead, type in the box; you won't break anything) to focus on a region or a specific state. You can view resident or non-resident tuition, adjusted or non-adjusted.

Single Institution

Just what it says.  The view starts with The University of Michigan, but you can change it to any state flagship using the control at top right. Percentage increase is best viewed in 2017 adjusted dollars, of course.

Percentage Change

Shows change of in-state tuition by institution over time.  The ending value is calculated as a percentage change between the first and last years selected, so use the controls to limit the years.  Again, highlight functions put your institution in context.

Non-resident Premium 

This shows how much extra non-residents pay, and trends over time.  Again, highlighter is your best friend.

Feel free to share this, of course, especially with people who are running for office in your state.

And, as always, let me know what you think.