Wednesday, July 1, 2015

Tuition Transparency Ratings

The Federal Government released its Tuition Transparency Ratings today, to help students and parents find out how fast colleges are raising tuition and net price.  And as is the case with many well-meaning government programs, the data don't always tell the whole story.

The top chart on this visualization shows tuition and fees at about 6,000 colleges and universities; the light blue bar is 2011, and the orange square is 2013.  To the right is the two-year percentage increase.  If you want to limit your selections or sort the colleges differently, take a look at this image, which I've embellished with some instructions.


The second chart, at the bottom, shows net price for 2010 and 2012.  Net price is calculated after grant aid, which is only reported at the end of the year, which explains the lag.  The layout is pretty much the same: 2010 on the aqua bar, 2012 on the red dot, and percent change in the purple circle.  The filters and sorts work the same way on this one.

There are a couple of problems here: One is the data.  I could not find a single program on the New England Culinary Institute website that listed a tuition of $88,000, but that's the data shown here. There are several instances like that in this data; even if they are technically accurate because of the way a program is configured, it doesn't advance our understanding of the issue much.

But more important, net cost is a function of who enrolls and how much aid you can give: If you suddenly stop enrolling middle-income students, or you have small enrollments, the results can be very volatile.  Net cost is a remnant, not a target that can be tightly controlled.  And it seems that in many instances net cost is being calculated by different people in different ways over the two-year period.

Still, there is some good stuff here, I think.  Take a look and let me know.




Tuesday, June 23, 2015

Looking at Medical School Admissions

Most of the things I look at have to do with publicly available data sets, and that often means undergraduate admissions.  But while doing some investigation, I came across data from the Association of American Medical Colleges.  There's some interesting stuff there, and while it's formatted in a way that makes it really difficult to get at, it's worth a little work.  (I'm not convinced that the formatting isn't an attempt to keep less stubborn people from digging too deep on this; my request to get the data in a better format was ignored.)

Best thing I learned: In 2014, of the 49,480 applicants to medical school, 41.1%, or 20,343, enrolled.  That's a far higher percentage than I would have thought, although it is lower than the 2003 rate of 47.5% (16,541 of 34,791).  It's clear, of course, that most medical school applicants are very well qualified, so that number represents the best of the best, but the perception of medical school selectivity is driven by the rates at each individual institution (sometimes 5% or less); in fact, each student applies, on average, to about 15 medical colleges, which skews the numbers.  These numbers are just for M.D. admissions, not D.O. or other medical professions.
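If you want to see how per-school rates can coexist with a 41% overall success rate, here's a quick back-of-the-envelope sketch in Python, using the 2014 figures above (the 15-applications-per-person average is approximate):

```python
# Why per-school admit rates understate an applicant's overall chances.
# Figures are the 2014 AAMC numbers quoted above; 15 applications per
# person is an approximation.
applicants = 49_480
matriculants = 20_343
apps_per_applicant = 15

total_applications = applicants * apps_per_applicant
overall_rate = matriculants / applicants          # chance a person lands a seat
per_app_rate = matriculants / total_applications  # chance any one application succeeds

print(f"Total applications: {total_applications:,}")         # ~742,200
print(f"Per-person matriculation rate: {overall_rate:.1%}")  # 41.1%
print(f"Per-application rate: {per_app_rate:.1%}")           # ~2.7%
```

The per-application rate lands in the same neighborhood as those scary single-school admit rates, even though two in five applicants end up enrolled somewhere.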

This visualization has seven views, and starts with an intro.  You can get to the other six by clicking the tabs across the top:


  • A scatter, showing each medical college, colored by region, on two scales: Total applications and the number of applications per seat
  • Historical data for MCAT and GPA performance for applicants and matriculants over time
  • Applications, by ethnicity.  These are in a heat map format; the orange squares represent the highest values on that individual grid
  • Admit rates, by ethnicity.  This represents (I'm 99% sure) the chance that a student in the category shown, represented by the intersection of column and row, was admitted to at least one of the schools she applied to
  • Applications per seat in the entering class, broken out by male, female, and in-state status
  • Matriculant diversity, shown as male/female and in-state/out-of-state
By the way, if you need some understanding of MCAT scores, you can see them by clicking here.

If you're like me, you have a lot of questions that are not answered by the data AAMC provides.  But it's still a good start.  What do you notice here?





Monday, June 8, 2015

Diversity of Institutions, by Type

A few posts ago, I wrote about where students of certain ethnicities went to college.  In other words, take all the Hispanic students in the US, see where they go to college, and compare that to Asian students, or students of two or more races.  I asked whether a student's ethnicity determines where they go to college.

This is the same data, but it examines the question from the other end: The colleges, and how diverse they are.  In other words, do your location, control, size, and Carnegie type, for instance, determine how diverse you are, or limit how diverse you can become?

Again, the answer is no, but you can find some interesting trends.

If you're timid about using Tableau and interacting with it, here's your chance.


  • First, choose an Ethnicity in the top left corner.  For instance, assume you want to display the percentage of enrollment that is Asian.
  • Then, choose what value you want to display along the y-axis (the left side, from top to bottom)
  • Choose how to display the x-axis using two controls.  If you want just one dimension along the x-axis, make it the same variable for both x-axis controls.
Using the default values, look at the top right box.  This means that at the two private-for-profit Doctoral/Research Institutions in the Western States, the undergraduate enrollment is 81.6% non-white.  Hover over the box for details.


Now, click on that box, and the bar charts at the bottom update to show you those schools, and the percentage of the student body of the ethnicity indicated.


As always, if you get stuck, just use the undo or reset buttons at the lower left:


There is a LOT to play with here.  What do you notice?





Monday, June 1, 2015

Enrollment at Women's Colleges, 2005 to 2013

Note: I got an email from Dean Kilgore at Mount Saint Mary's in California, who indicated I'd downloaded data for the wrong Mount Saint Mary College: in this case, the one in New York.  I had to create the list manually, and it was just a slip on my part.

Sorry about that. I've removed them from the analysis, but unfortunately, can't add the correct one at this time without a considerable amount of work.



Sweet Briar College in Virginia recently announced, to the shock of many in higher education, that it would be closing at the end of the spring 2015 term.  As often happens when a college decides to close, those who are or were close to it rally the troops and wage a fierce campaign to try to keep it open.  Sometimes it works; other times, it doesn't.

The scene playing out is not unusual: Allegations of secret deals, incompetence, blindness to all that is and was good at Sweet Briar.  This is what happens when you decide to close a college.  And although I'm not taking sides, I did write before that the closing does seem curious in light of what little publicly available financial data there is: If you had to pick a college from this list that was going to close, it probably wouldn't be Sweet Briar.  Even the federal ratings of financial responsibility gave Sweet Briar a 3, a score higher than Harvard's, which may only point out how absurd those ratings are in the first place.

A while ago, I downloaded a pretty extensive data set, using the members of the Women's College Coalition as my base.  Not all colleges have data available in IPEDS, however, so I did the best I could (for instance, the Women's College at Rutgers is not in IPEDS as a separate institution, or if it is, I couldn't find it.  And I took out Saint Mary of the Woods, as they just announced they're going co-ed).  Also, since there is no IPEDS data field that tells you when a college is a women's college, I couldn't go back and find out how many were labeled as such 20 years ago.  That might have been interesting.

Overall, though, the data were pretty uninteresting.  So I gave up on visualizing it.  There were trends, of course, but nothing dramatic.

So, when I saw this article, by one of the people leading the charge on the Save Sweet Briar campaign, one sentence jumped out at me:

Enrollment: There is no evidence that enrollment is declining, either at Sweet Briar or at women’s or liberal arts colleges. This claim is simply false. Numbers people, please check for yourself: The data are publicly available.

The data are available, and the link goes to the IPEDS site I use all the time.  So, take a look here.  There are five views of the data, using the tabs across the top.  The first shows changes in freshman, total undergraduate, and graduate enrollment over time.  The changes on the right are shown in relation to the prior year.  The second shows the same data, but the change is cumulative since 2005: As you can see, total undergraduate enrollment is down almost 6% during a time when enrollment increased nationally.  The third shows admissions activity; the fourth breaks it out, showing Sweet Briar and all the other women's colleges in aggregate.  And the fifth shows total undergraduate enrollment in 2005 and 2013 (on the left) and change (on the right.)  As you can see, there are some big winners, some big losers, and a lot of small changes.
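If you want to check the arithmetic yourself, here's a minimal sketch in Python of the two change calculations behind the first two views; the enrollment figures are made up for illustration:

```python
import pandas as pd

# View 1: change vs. the prior year.  View 2: cumulative change since 2005.
# These numbers are invented, chosen only to illustrate the calculation.
enrollment = pd.Series(
    [750, 740, 755, 730, 720, 715, 710, 708, 706],
    index=range(2005, 2014), name="undergrad_enrollment",
)

yoy_change = enrollment.pct_change()                # year-over-year
cumulative = enrollment / enrollment.loc[2005] - 1  # vs. the 2005 base

print(cumulative.loc[2013])  # about -0.059, i.e., down almost 6%
```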

Decide for yourself.  And tell me what you see:





Wednesday, May 27, 2015

Does Ethnicity Determine Where You Go to College?

The answer to the headline, of course, is "no."  Race does not determine where you go to college, but race--or more probably the factors that vary with race and ethnicity--may influence your college choice set, which can, of course, influence where you go to college.

I've written before about how all these variables are at play with each other: In America, race, income, parental attainment, and presumably, opportunity, all cluster together.

And, after you look at this, you'll see how opportunity gets distributed by race, provided you're willing to click a button or two.  The visualization starts off showing all undergraduate enrollment in almost 7,000 post-secondary institutions that report to IPEDS.  (And before you object, I believe strongly that the college you attend is not your destiny, and that education is what you make of it, as I've written before on my other blog. But it's also clear that many people believe talent congregates at the "best" colleges, and the way this plays out in hiring and graduate school admissions can be troublesome.)

As you can see, almost 75% of all undergraduates go to a public institution; the majority of them go to Associate's granting institutions (almost all community colleges.)

But use the control at the top right to see how the distribution changes: Try Black or Hispanic or Asian or White to watch the bars move.  What you see is a change: Hispanic and African-American students go to community colleges and for-profits at much higher rates than their White and Asian peers.  White students go to private, not-for-profits at higher rates than almost any group, except Non-residents.

International (Non-resident, here) students flock to research universities, but are also far more likely than any group to attend a private university.  This is because they avoid for-profits in great numbers.

What else do you notice?

If you want to limit the population, feel free to use any of the filters, but beware that the percent of totals are taken on the base of the sub-population you've chosen, not the entire population.  You'll notice there are further differences by geography and campus urbanization among other things. I'd love to hear what you turn up that's intriguing.

Friday, May 22, 2015

Endowments Over Time

Is it true that the rich get richer, as suggested by a recent Washington Post article discussing college endowments?  Probably.

But what's more interesting is the pecking order in the world of endowments, and how it's changed (or not) over the last decade or so.  I downloaded trend data from IPEDS showing beginning-of-year endowment by fiscal year for about 160 four-year private institutions with enrollments of at least 5,000.  To be sure, there are some very well-endowed colleges with smaller enrollments (Williams, Pomona, Grinnell, etc.), but this chart could get messy pretty fast.

I pulled the data into Tableau and created a bump chart, showing not the value over time (because everyone does that) but rather the rank of all the endowments shown.  So, for instance, if a line slopes down, it might mean a college has gone from 10th to 20th; a slope up might show the opposite.

But what we see here are lines that are essentially flat.  There are a couple of anomalies, of course, and as always with IPEDS, a warning about human errors in data entry, incompetence, or other agendas at work is in order.  When data was missing for a year, I simply put the prior year's data in its place for continuity.
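For the curious, the bump-chart prep boils down to a forward-fill and a within-year rank.  Here's a rough sketch in Python; the file and column names are my stand-ins, not actual IPEDS field names:

```python
import pandas as pd

# Hypothetical input: one row per institution per fiscal year.
df = pd.read_csv("endowments.csv")  # columns: institution, fiscal_year, endowment

# Fill a missing year with the prior year's value, for continuity.
df = df.sort_values(["institution", "fiscal_year"])
df["endowment"] = df.groupby("institution")["endowment"].ffill()

# Rank 1 = largest endowment that year; the bump chart plots this rank
# (not the dollar value) against fiscal year.
df["rank"] = df.groupby("fiscal_year")["endowment"].rank(ascending=False, method="min")
```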

I started with 30, but you can show more or fewer using the slider at the top.  Just pull it to however many colleges you want to display.

See anything interesting?



Wednesday, May 20, 2015

A Look at Federal Loans for Students

We hear all the time about the student loan bubble.  Is it a problem?  Like most things, it depends on how you look at it.  Here is data viewed from a very high level, without the benefit of being able to drill down.

It shows the status of the government's student loan portfolio, broken out by the older FFEL programs and the currently existing Direct Loan Program; the FFEL loans are older, while the Direct Loans are a mixture of old and new.

Some interesting trends appear, even over relatively short periods of time.  What do you see?

And, if we had access to better, more granular data, what would you like to see?



Tuesday, May 12, 2015

Yes, Your Yield Rate is Falling

A recent article in the Chronicle of Higher Education pointed out the things colleges are doing to bolster their yield rates.  This, of course, raised an interesting question among many outside of higher education: What's a yield rate?

Colleges admit many more students than they want to enroll, of course.  But let's say you want to enroll a class of 1,000.  How many, exactly, do you need to admit?  Most of the students you admit will have offers from other colleges as well, and they can only enroll in one place.  So, if you admit 2,000, you need exactly half of them to enroll, which would mean a yield rate of 50%.  If you're not confident you can get that kind of yield, you admit more: 2,500 with a 40% yield rate gets you that same number (40% of 2,500 is 1,000).  But at most institutions, yield rates are closer to 30%, so that means 3,333.  Or thereabouts.
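If you'd rather let a computer do the arithmetic, here's that calculation as a tiny Python sketch:

```python
import math

# How many students you must admit to land a class of a given size
# at a given yield rate.
def admits_needed(target_class: int, yield_rate: float) -> int:
    return math.ceil(target_class / yield_rate)

print(admits_needed(1000, 0.50))  # 2000
print(admits_needed(1000, 0.40))  # 2500
print(admits_needed(1000, 0.30))  # 3334 -- "3,333.  Or thereabouts."
```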

Alas, many colleges are afraid of that admit rate (the percentage of applicants admitted) getting too high, because for many parents and students, a low admission rate is a proxy for quality: An admit rate of 15% means (to some) that an institution is better than one with an admit rate of 30%.  And so on.  Part of what they do is generate "softer" applications, via a variety of methods I've talked about many times, ad nauseam.  But the problem is that you don't know precisely which apps are soft, so you can't just admit the same number of students, because the soft apps (with lower propensity to yield) will bring down your yield rate.  So admit rates fall, but ultimately, so do yield rates.

Managing and reviewing more applications is expensive, and if you care greatly about that admit rate, you try to keep it as low as possible and still make your class, by raising the yield.  Looking at demonstrated interest is one way; using financial aid more strategically is another; and finally, good old fashioned tactical approaches are another still.  Many places use all three.

Here is what our wheel spinning and tail chasing has spawned: Thirteen years of increasing applications, increasing admits, and decreasing yield and draw rates (draw is a better measure of market position vis-a-vis competitors because it punishes you if you try to appear more selective at the price of yield).

The first view here shows colleges in groups, starting with all 1,432 public and private not-for-profit, four-year, degree granting colleges in the US that admit freshmen in the traditional Carnegie classifications (Baccalaureate, Master's and Doctoral, excluding Baccalaureate-Associates colleges). You can use the filters to look at any combination of variables you'd like to see how things have changed.

The second view (using the tab across the top) allows you to use the filter to select any single college.  And, if you're like most people, the first ones you select will be the big names, whose trends appear to move in the opposite direction of the industry as a whole.  Which means that, for all those institutions trying to look like they're in the RBL (REALLY Big Leagues), all that effort has put them farther behind.

What do you see? Leave a comment below.


Friday, April 17, 2015

State-to-State Migration of Freshmen

Previously, I did an analysis of colleges, showing which states freshmen came from in 2012.  It was very popular with people who are interested in the topic of geographic diversity.

But I heard another admissions person say last week (while I was at Missouri ACAC) that her college wants to enroll more students from outside the state.  It's not hard to figure out why: Students who cross borders (or who travel farther to college) are generally wealthier and have parents who are college-educated themselves, both of which make these students attractive targets.  Since the number of these students who are likely to migrate is essentially fixed in any given year, the intent to recruit more of them is really an effort to get a bigger piece of the pie.

But some states have naturally stronger pulls; others export more because of a relative lack of opportunity based on a smaller selection of colleges.  Where do students in your state come from?  Is your state enrolling a lot of students from out of state, or do you send a lot away?

Here is some 2012 IPEDS data for you to look at.

Note:


  • There are two views of the data (note the tabs at the top). 
  • On each of these two views, the columns--that is, the states listed across the top--represent the state where the colleges are located.  
  • The rows represent the home states of the freshmen (this includes only domestic students, not international students on visas).  
  • The box is colored by the percentage of students in the college's state that come from the state on the row.


For instance, hovering over this box on the first view, you'll see that 24.6% of students at colleges in Alabama came from Georgia:


The first (orange and gray) view shows only non-resident students; the second view (with the purple boxes) shows all freshmen.  On that one, notice that 20% of all freshmen at Iowa colleges come from Illinois. Knock it off, Iowa. Note the diagonal band where each state intersects with itself.  And note the range in color on that diagonal.
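For the data-inclined, the coloring boils down to a column-normalized percentage.  Here's a minimal sketch in Python; the file and column names are hypothetical stand-ins (IPEDS publishes these as state-pair counts):

```python
import pandas as pd

# Hypothetical input: one row per (home state, college state) pair.
df = pd.read_csv("migration.csv")  # columns: home_state, college_state, freshmen

matrix = df.pivot_table(
    index="home_state", columns="college_state",
    values="freshmen", aggfunc="sum", fill_value=0,
)

# Each column sums to 1: the share of a college state's freshmen
# coming from each home state -- the percentage behind the coloring.
pct = matrix / matrix.sum(axis=0)

# e.g., pct.loc["Georgia", "Alabama"] would be about 0.246 on the
# non-resident view described above.
```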

How attractive is your state, and to which ones? It's an important question to ask before you assume you can swoop into a state and get more students.




Friday, April 3, 2015

Using an Ecologist's Measure of Diversity in Higher Education

Diversity is a topic a lot of us in higher education think about and write about and work towards, and yet, we don't really have a common definition of what it means. At its most basic level, we simply talk about the percentage of our students who are non-white. And, of course, if you compare colleges today to those in the 1950s, this makes perfect sense, and allows us all to give ourselves a pat on the back.

But the success of Asian students over the past few decades has complicated this: While they are not white, their large numbers at the nation's most selective institutions, and their performance on college admissions examinations, make us occasionally shift the discussion to under-represented students of color, which today might include Native Americans and Alaska Natives, Latino or Hispanic students, African-American students, Asians who are Hawaiian or Pacific Islander, and students of two or more races or ethnicities. This, of course, causes us to wonder whether a student of mixed Asian/Caucasian ethnicity should count, and to remember that technically, Hispanic is not a race. It's all very confusing.

On top of that, there are institutions that serve large numbers of under-represented students (HBCUs, for instance) but are not very diverse in the clinical sense: Almost everyone enrolled in those institutions is African-American. How do we describe diversity in a way that makes sense to everyone?

One way to do it is to use a measure called Simpson's Diversity Index. You can read about it here if you'd like, but it essentially says that once you define categories and count the population in each, you can calculate the likelihood that choosing any two members at random presents a mismatch of type. For instance, at a college in Puerto Rico, if you randomly select two students, the chance they are of different ethnicities is probably very small: You'll usually get two Hispanic students. Go to Howard University, and odds are you'll select two African-American students on your trials. This translates into a lower Simpson's number. If you have a university that is truly more diverse in the ecological sense, you'll see that number go up.  All the numbers in the index are between zero and one.
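If you want to compute it yourself, here's a minimal sketch in Python of one common form of the index (sometimes called the Gini-Simpson index), using made-up enrollment counts:

```python
# D = 1 - sum(p_i^2), where p_i is each group's share of the population:
# the probability that two students drawn at random are of different types.
def simpson_diversity(counts: dict) -> float:
    total = sum(counts.values())
    return 1 - sum((n / total) ** 2 for n in counts.values())

# A nearly homogeneous campus scores near zero...
print(simpson_diversity({"Hispanic": 980, "White": 15, "Black": 5}))  # ~0.04

# ...while an evenly mixed one scores much higher.
print(simpson_diversity({"White": 250, "Black": 250,
                         "Hispanic": 250, "Asian": 250}))             # 0.75
```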

Of course, it's short-sighted to measure diversity just on race or ethnicity, but it's the thing we have the best data on. We can add other elements into the mix, but since the data are pre-aggregated, we cannot break the groups into subgroups (for instance, wealthy White students vs. poor White students.) This would yield better insight.

Look below. The first view shows all four-year, public and private not-for-profit colleges and universities in the US, and their Simpson's Diversity Index as calculated from total undergraduate enrollment in Fall 2013. On the first view, the bars are colored by freshman admit rate, with an interesting theory suggesting that if your admit rate is low, you could be more diverse if you really wanted to be. In the tool tip that pops up when you hover over a bar (like in the screenshot right below), you'll see the breakdown of enrollment by ethnicity.


And if you hover over several bars in the same range, you'll see you can get to similar numbers in very different ways. So, even among diverse institutions, there are very different student body mixes in play.

On the second tab, you'll see some element of economic diversity added in: Pell Grant eligibility as a color. The chart is a scatter of Simpson's Index and admit rates.

One note: I calculated the index two ways, using as the base number only those with known ethnicity, and then all students, including those whose ethnicity was not listed.  I think the first is probably the better measure, but I did include the second in the tool if you're interested.

Do you see anything interesting here? I'd love to hear it.



Wednesday, April 1, 2015

Sorry, Harvard, Princeton, Yale, and Stanford. You Lose


This week, all the hype about college admissions comes out.  Blah blah blah this college admitted only 7%.  Blah blah blah Oh Yeah? We admitted only 6%! We're better.

So, I says to myself, "Myself, I says, what is the real measure of the best college?"  And it became clear: The best college is the one everyone wants to copy!  And then I asked myself, "How do you do that?"  "Why," I replied, "by copying the name."

So here are the most common names of colleges in America.  As you can see, Columbia College and Bryan University are duking it out for the top spot.  If DeVry got their act together, they could win every year, as an informal analysis says there are about a bazillion of them.

So, sorry Duke (if that is your real name.)  Clearly, no one wants to be associated with the likes of you.


Tuesday, March 31, 2015

How Admissions Has Changed, in One Chart

I frequently hear that the interactive charts I publish are too confusing or time-consuming, and that it's hard to get the story out of them without some work. So today, I'm making it easier for you, for two reasons: First, this is real student data, not summaries: Each dot represents a student who applied for financial aid, so I'd never publish that data interactively on the web; this is just a good, old-fashioned picture of a chart.  Second, in this case, one chart tells the whole story.

The population here is all freshman financial aid applicants who completed a FAFSA but did not have need.

Each column is one year, and each dot in that column represents a student; higher positions in the column show higher income, from zero to one million dollars in parental AGI (adjusted gross income).  This is arrayed in a box-and-whiskers, or box plot.  The yellow boxes show the limits of the middle 50% of the distribution (the "box"), with the color break representing the median.  The top whisker (the black horizontal line) represents the 75th percentile.  In other words, 25% of the applicants have incomes above that line.  The bottom whisker marks the 25th percentile. Yes, there are people with very low incomes who do not qualify for need-based aid, usually due to large asset bases.
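If box plots are new to you, here's a small sketch in Python showing how the pieces map to percentiles; the incomes are synthetic, since the real student-level data isn't public:

```python
import numpy as np

# Synthetic stand-in for one year's column of parental AGI values.
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=12.2, sigma=0.6, size=2000)

p25, p50, p75 = np.percentile(incomes, [25, 50, 75])

# The yellow box spans p25 to p75 (the middle 50%), the color break is
# the median (p50), and 25% of applicants sit above the top line.
print(f"25th: {p25:,.0f}   median: {p50:,.0f}   75th: {p75:,.0f}")
```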

Note the way the black line rises over time, from about $430,000 in 2007 to almost $600,000 in the last two years.  There are several possible explanations for this, all of which are probably valid to some extent.

  • It's a buyer's market, and college recruitment activities have brought in people who are shopping in more places
  • People who never would have applied for aid in prior years are doing so, because the 2007 financial crisis evaporated many assets, like home equity, that people might have used to pay for college
  • Other colleges are requiring a FAFSA for merit aid consideration so we get the FAFSA as a residual.  No one, it seems, is opposed to trying to get a lower cost
  • Colleges are so afraid of losing someone due to price considerations they encourage everyone to "give it a shot" and see if they are eligible.
One note: In 2014 we had 31 applicants whose income was $1,000,000 or more who are not shown here, and who would have pulled the distribution up. These people used to show up in prior years as $999,999, so I took them out for equal comparisons. And, in anticipation of the next bump, we did have one family who reported an AGI of $9,999,999 for 2014 when they completed the FAFSA.

This post shows Financial Aid data, but the title says it's about how admissions has changed. What do you think? How are the two related?




Tuesday, March 24, 2015

Application Fees

Ever since my first day in admissions, I've had a big problem with the concept of college application fees.  They just seem odd to me: You pay some amount of money for the privilege of being considered for admission, often not certain you'll attend if you are.  And if you're not admitted, you're out of luck.

I understand those who support the concept, in concept: Students shouldn't apply to a lot of colleges, and they should be somewhat serious about the colleges they apply to.  Except we know that doesn't happen.  The counselor at my kids' school said a few years ago that one student applied to 46 colleges, and the Fast Apps, Snap Apps, and VIP apps all encourage students to apply to places just because they can.

I also realize that there are costs associated with processing applications, although those costs have dropped pretty dramatically in the past several years, especially when all the documents come in electronically. But all the costs of doing business are paid for by the students who pay tuition, and, presumably, more applications are good for the college they attend.

There may be other models where this system is used, but I'm not able to come up with any.  All I can think about is having to pay $50 just to walk onto the Toyota lot and shop for cars (which is hardly a perfect analogy, either.)

Too often in the discussion about things like "admit to deny," people will point out that app fees from students who have little chance of being admitted are a revenue source for colleges.  Technically yes, but actually no. At most institutions, it's about 1/10th of 1% of total revenue.

So, take a look at what colleges charge to apply.  This visualization starts with just under 2,000 four-year colleges and universities, each represented by a dot.  IPEDS apparently lists the highest fee a college charges when there are multiple levels.

Hover over the dot for details.  The bar chart at the bottom shows the breakouts as a percent of total. Use the filters on the right to show a smaller set of colleges.

What do you notice?



Monday, March 23, 2015

Another Way of Looking at Graduation Rates

Another article appeared in my Facebook feed about college ROI, although it was called the 50 Best Private Colleges for Earning Your Degree on Time.  As is often the case, there is nothing really wrong with the facts of the article: You see a nice little table showing the 50 colleges with the highest graduation rates.

But it got me to thinking: What if high graduation rate wasn't enough?  What if a considerable portion of your freshman class that graduates takes longer than four years to do so? Is that a good deal?  Let's take some hypotheticals:

College A: 1000 freshmen, 800 who graduate within four years, 900 who graduate in five, and 950 who graduate in six.  So the four-, five-, and six-year graduation rates are 80%, 90%, and 95%.  But of the 950 who eventually graduate, only 84.2% do so in four years.

College B: 1000 freshmen, 750 who graduate within four years, 775 who graduate in five, and 800 who graduate in six.  So the four-, five-, and six-year graduation rates are 75%, 77.5%, and 80%. Thus, of the 800 who eventually graduate, almost 94% do so in four years.

College C: 1000 freshmen, 550 who graduate within four years, 600 who graduate in five, and 625 who graduate in six.  So the four-, five-, and six-year graduation rates are 55%, 60%, and 62.5%. Of the 625 who eventually graduate, 88% do so in four years.
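Here are the three hypothetical colleges as a tiny Python sketch, if you want to check the arithmetic:

```python
# For each college: the six-year graduation rate, and the share of
# eventual graduates who finish in four years.
colleges = {
    "A": {"freshmen": 1000, "grad4": 800, "grad6": 950},
    "B": {"freshmen": 1000, "grad4": 750, "grad6": 800},
    "C": {"freshmen": 1000, "grad4": 550, "grad6": 625},
}

for name, c in colleges.items():
    six_year_rate = c["grad6"] / c["freshmen"]
    on_time_share = c["grad4"] / c["grad6"]
    print(f"College {name}: 6-year rate {six_year_rate:.1%}, "
          f"graduates finishing in 4 years {on_time_share:.1%}")
# A: 95.0% and 84.2%;  B: 80.0% and 93.8%;  C: 62.5% and 88.0%
```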

If you were choosing among these three colleges, which might you choose?  The easy money says you go with College A, the one with the highest graduation rate. College B would be your second choice, and C would be your third.  But what if you are absolutely, positively certain you'll graduate from the college you choose? College B is first, then College C, then College A.

Data can be tricky.  And as I've written many times, things like graduation rates are really almost inputs, not outputs: If you choose wealthy, well-educated students, you're going to have higher graduation rates.  It's a classic case of making a silk purse out of, well, silk.

I've tried to demonstrate this in this visualization, and I like the simplicity here.  Each dot is a college (hover over it for details).  They're in boxes based on the average freshman ACT score across the top, and the percentage of students with Pell along the side.  The dots are colored by four-year graduation rates, and you should see right away the pattern that emerges.  Red dots (top right) tend to be selective colleges with fewer poor students.

But if you want to look at the chance a graduate will finish in four years, use the filter at the bottom right.  Find a number you like, pull the left slider up to it, and see who remains.  (Just a note: I'm a little suspicious of any number of 100% on this scale, which would mean absolutely no students who graduate take longer than four years to do so.  It might be true, but it's hard to believe. But I'd set the right slider to 99% at the most.)  Remember, there's a lot of bad IPEDS data out there, so don't place any bar bets on what you see here.

What do you see?



Wednesday, March 4, 2015

Are we all doomed?

If you follow media coverage of higher education, you know that for a while, many have been (somewhat gleefully) predicting the demise of the whole industry.  High costs, MOOCs, a weak job market, and shrinking confidence in the value of a college degree are all conspiring, they would say, to create a perfect storm that will be the end of us all.

I'm not saying these people are wrong;  you can get in trouble arguing with self-proclaimed prophets, and until something either comes to fruition or it doesn't, all you have is a lot of heated discussion. Personally, I take exception to the smugness of some who seem to revel in their predictions.

But that is, as they say, why they make chocolate and vanilla.

The heat (if not the light) increased this week when Sweet Briar College in Virginia announced it was closing.  The pundits came out of the woodwork, proclaiming that this was just the first domino to fall, all the time apparently reveling in this presumptive proof of their collective acumen in predicting such things.

But a look at publicly available data makes it hard to predict such things; many colleges soldier on despite numbers that make them look vulnerable, while a college like Sweet Briar, which occupied a pretty good position on the second chart, below, found itself a victim of the most obvious college problem: enrollment too small to support the institution.

You might think that Sweet Briar is the first of many.  You could say the industry is collapsing.  And you might be right.

But it seems there is nothing a prophet likes to point at more than evidence he might be right.  No one is saying (yet) that any of the other possible reasons things might have gone south could be at work, even though there is much more attention paid to this college than to the roughly one college per month that has closed since 1969. If it later turns out (and I have no reason to believe it will) that this was a board with no vision, or a horrible case of mismanagement, or one of dozens of other possible causes we can point to, the pundits are unlikely to correct what they're suggesting today.

So take a look at this.  On the first chart, you can see the array of colleges and universities, and with a click of a bubble, find out who's where.  On the second, you can put any college in context with a couple of clicks.  Have fun. Don't get too worked up over what you see.  It's not destiny.

As for me, I'll tell you what Mark Twain once said: “I was gratified to be able to answer promptly, and I did. I said I didn’t know.”  And I'm sticking to it.

Note: It's important to remember that IPEDS data that this is built with contains errors on occasion; don't make any bar bets on what you see here, and if your institution is incorrectly listed, take it up with your IR office.



Friday, February 27, 2015

Ten Years of Endowment Data

While the endowment of a private university is not a big investment pot from which universities draw income to spend at their discretion (some portion of every endowment is restricted to certain uses), it's a very good proxy for institutional wealth.  What's always been interesting is the enormous size of the top five or six endowments, always led by Harvard, in comparison to everyone else.  And yet Princeton, which enrolls fewer students, has the largest per-FTE endowment.

This visualization shows two things.  On the top chart, it's a tree map, or what I like to call a sheet cake map.  Think of all the money in all the endowments as one big bowl of batter baked into a cake, and then, once baked, sliced up into pieces.  The size of the piece is that institution's endowment as a part of the whole.

The bottom chart shows ten years of endowments, measured at the start of the fiscal year shown, so you can see the hits in 2008-2009 and the overall growth over time.  Of interest: There are only three private universities in the US whose total endowment in 2012 equaled the ten-year growth of Harvard's.

If you click on an institution, the line chart at the bottom will filter to just that college over time.  If you hover over a line on the bottom chart, it will highlight the institution on the top so you can see its place in the endowment universe.

What do you see?



Wednesday, February 18, 2015

Four years of Ivy League Tax Returns

I love the Internet.  Thirty years ago, I couldn't have imagined being able to look up several years of tax returns for the Ivy League Colleges and Universities (let alone being interested in them.)  But Guidestar (a great site you should check out, in case you don't know it) comes to the rescue.  The documents are PDFs, unfortunately, but you learn a lot by inputting the data manually into a spreadsheet.

For your information: By law, all universities that receive Title IV funding must make tax returns available to the public, so there is nothing clandestine about this.

The tax returns can show you, albeit at a very high level, how the Ivy League institutions generate revenue, and how they spend it. To no one's surprise, salaries and benefits dominate at almost all colleges and universities, and if you're really curious, the returns list in detail how much the officers and highest-paid non-officers make.

But as I once suggested, the most interesting thing is the massive investment return these institutions generate; even the "poorest" of them--Brown University--averaged about $124 million in investment return over these four years.  Collectively, the investment return of these eight institutions averaged over $550 million per institution per year, for a grand total of $18 billion over the four years. To put that in some perspective, there are about 1,553 private, not-for-profit, four-year colleges and universities in America with revenue data in IPEDS; 1,506 of them had total revenues of less than $550 million in 2013.

Take a spin around this.  It's fairly interesting for the most part, and very interesting for one reason: Princeton's 2013 data (from the 2012 Tax Return, which I've put here in case you want to take a look.)  The return shows an operating deficit of almost $1.3 billion, driven by an investment loss of over $800 million. I asked an expert on university finance (not affiliated with my own institution) about this, and here is what he said:

We were doing some analysis using IPEDS finance info and it showed some really weird results, with Princeton being the strangest of all.  It caused me to pull their audited financial statements and examine them.  Here’s a link to the statements in case you’re curious.  Nothing weird showed up in the statements so I attributed the problem to IPEDS and the Department of Education.  Now having looked at the 990, I believe Princeton has suffered some turnover among its finance staff and the folks doing their reporting don’t know what they’re doing.  As you will see, the financial statements appear to be quite different from what was reported in the tax return.

So, take this, and everything you read from publicly available data, with a grain of salt.




Friday, February 13, 2015

The Race Goes On

Unless you live under a rock, you probably know that colleges are, in general, interested in increasing the number of students who apply for admission.  There are a couple reasons for this, but they're all mostly based on the way things used to be: That is, before colleges started trying to intentionally increase applications.  The good old days, some might say.

In general, increasing applications used to mean a) you could select better students, who would be easier to teach, and who might reflect well on your college, or b) you as an admissions director could sleep a little better, because you were more certain you could fill the class, or c) your admission rate would go down, which is generally considered a sign of prestige.  After all, the best colleges have low admission rates, right?

Well, yes, one does have to admit that the colleges that spring to mind when one says "excellent" all tend to have low admission rates.  Lots of people want to go there, and thus, it must be good.  The trained eye might be able to spot the forgery, but what about the average person?

This week, we have another journalistic treatise presumably exposing colleges for the ways in which they attempt to increase applications.  The tactics listed in this article are nothing new: Reduce the essay, waive the fee, encourage more low-income kids.  Barely mentioned were the "Fast Apps/VIP Apps/Priority Apps" many colleges use, which allow them to count an "applicant" as anyone who clicks an email link that says "Start your application."

However, application increases only pay off when you generate them from students who have a reasonable propensity to enroll.  Prestige can be measured by a little-used variable that punishes you when you increase applications to try to look more selective at the cost of decreasing your yield: It's called the Draw Rate, and it's a powerful indicator of market position.  It's a simple calculation: Yield rate divided by admit rate.
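Here it is as a small Python sketch, with hypothetical numbers showing how chasing a lower admit rate can still drag the draw rate down:

```python
# Draw rate = yield rate / admit rate, as described above.
def draw_rate(applications: int, admits: int, enrolled: int) -> float:
    admit_rate = admits / applications
    yield_rate = enrolled / admits
    return yield_rate / admit_rate

# A hypothetical college doubles its applications with "soft" apps.
# Its admit rate falls (it looks more selective), but yield falls too,
# and the draw rate exposes the weaker market position.
print(draw_rate(applications=4000, admits=2000, enrolled=1000))  # 1.00
print(draw_rate(applications=8000, admits=3000, enrolled=1000))  # ~0.89
```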

Here's a secret: For some percent of the freshman class, let's say 33%, recruitment doesn't come into play at all.  A large chunk of your enrollment is natural; that is, those students are likely to enroll no matter what you do.  The next 33% are going to enroll presuming you do everything correctly, make it affordable, and help them understand how they fit. But the last group, that final third, comes from students who have little predisposition to enroll.  Your recruitment tactics focus on them, and you spend most of your time trying to find them, get them to apply, and then to enroll.  They may make up as much as 75% of your pool.  They enter your applicant pool with a lower level of interest.

The problem is that usually, a big increase in applications comes not from the first or second group, and not even the third, but rather a fourth group, the "Ain't no way I'm going to enroll short of a miracle" group.  The bigger problem is that you don't always know exactly who these students are. This is one of the reasons demonstrated interest has become a topic of discussion.

When you artificially increase applications, and you have to cover your ass by admitting more, your yield is going to drop.  And so will your draw rate.

So, let's look at the data.  These charts start out very busy, so you should interact by selecting just a region or Carnegie type.  But even at their busiest, you can see: a) applications are up, b) admits are up, and c) yield rates are down at almost every type of institution, with the exception of the big, private research universities.  The ones you can rattle off without thinking too much.

But look at the Draw Rates, on the last two charts.  Draw rates are down across the board, mostly because capacity is relatively constant, the supply of students is down, and competition is up.  The only winners in the battle to increase prestige? The ones who were prestigious in the first place.  The money spent trying to join that club, or sometimes even just to look more like them, could have been put to better use.

Use the boxes across the top to see the six points of this Tableau Story Points visualization.  Note that the last one exposes some data anomalies which are inherent in IPEDS, often due to typos or new IR staff who count the wrong thing (my alma mater in 2010-2011, for instance.)

What do you see? And what do you think?  Is the race for prestige dooming us? Or is it just the latest evolutionary stage in the natural process of competition?




Thursday, February 12, 2015

Degrees Awarded by State

Frankly, the data are a little boring when you first try to visualize them.  When you're looking at the number of degrees awarded by discipline and by state, California, Texas, and New York win pretty much everything.  That's no surprise, of course, as they're the largest states with the most college students.

So I broke it into regions, thinking there must be some differences in the degrees awarded in different parts of the country.  Nope.  The Middle Atlantic wins.  That's where the people are.

Finally, I looked at each state by the percentage of degrees in certain fields, and voila! Something interesting. Different states award different types of degrees in dramatically different proportions. Some of this can be answered easily: A high percentage of business and computer science degrees in Arizona is driven by the University of Phoenix, but others are not so obvious. Why is there such disparity when you look at humanities, engineering, or health professions?

To interact, just select the type of degree in the purple box at the top.  It starts with business, but you can choose anything.  The maps and the bar charts will update to show each state, and the percentage of bachelor's degrees in that state in the discipline selected.

Any explanations?


Thursday, February 5, 2015

When Infographics Fail

There are a lot of bad infographics floating around the Internet.  When they concern things like the difference between cats and dogs, or how many hot dogs and hamburgers Americans eat over the 4th of July, it's no big deal.

But this blog is about higher education data, and when I see bad infographics on that topic, I feel compelled to respond.  This one is so bad it's almost in the "I can't even" category.  It takes very interesting and compelling data--the graduation rates of Black male athletes--and compares it to overall graduation rates at several big football schools in the nation.  Here it is:


For starters, this chart appears to stack bars when they shouldn't be stacked: A graduation rate of 40% for one group and 40% for another group shouldn't add up to 80%.  The effect is that it distorts much of what your brain tries to figure out.  For instance, look at the overall rates (longer bars) for Georgia Tech and Pittsburgh;  Georgia Tech at 79% is shorter than Pittsburgh's at 77%, because they started at different points.

But wait, they can't be stacked; Louisville's 44% + 47% is way longer than Notre Dame's 81%. Stacked bars on dual axes?

These also look at first like they could be two sets of bars, with one (the overall graduation rate, which is always higher) behind the Black male graduation rate.  But that can't be, either.  The effect is that you look at Notre Dame and see a very long gap between 81% and 96% (a 15-point spread) that appears to be longer than the 37-point spread at Virginia.

In short, I cannot tell you how this chart was made, or what the assumptions are, let alone what the story really is.

And the background image behind the chart makes things even worse; it's hard to see the bars at all.

Finally, a third element might have been interesting here: The graduation rate of Black males who are not athletes.  It might shed more light on the problem, although if the same designer did it, I'd not be confident.

Here's the data presented three ways, each of which tells the story differently, but each better in at least one way. This was literally 15 minutes of work.
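For what it's worth, here's a sketch in Python of one of the simpler alternatives: plain side-by-side bars on a single axis, so nothing stacks and the gaps are comparable.  The numbers are the few pairs quoted above, as best I can read them off the flawed graphic:

```python
import numpy as np
import matplotlib.pyplot as plt

# Grouped bars: each school gets two bars on the same axis.
schools = ["Louisville", "Notre Dame"]
black_male_athletes = [44, 81]
overall = [47, 96]

x = np.arange(len(schools))
width = 0.38

fig, ax = plt.subplots()
ax.bar(x - width / 2, black_male_athletes, width, label="Black male athletes")
ax.bar(x + width / 2, overall, width, label="Overall")
ax.set_xticks(x)
ax.set_xticklabels(schools)
ax.set_ylabel("Graduation rate (%)")
ax.set_ylim(0, 100)
ax.legend()
plt.show()
```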

What do you think?