Interesting Digital Shorts

There’s a new web series called H+ (pronounced H plus, not H positive if you’re in a medical frame of mind) that premiered earlier this month.  Normally, I’m not into digital shorts, but this one stuck out to me because it’s near-future, realistic science fiction.  My basic understanding of the series is that it’s about a computer virus that manages to wipe out a lot of the Internet.  The novelty of this idea is that the virus targets people with implants (the eponymous H+) that enable them to connect to the Internet, and it seems to kill them.  The series jumps around in time, looking at the moment when all the networks are attacked, years before as the technology develops (and becomes as ubiquitous as cell phones), and some time later as society tries to recover.

It’s got some high production values too, since Warner Brothers’ digital branch is producing it.  I’ve watched the first six episodes already (they’re all about 5 minutes long), and they’re all quite good, if a bit mysterious still.  But I’m very excited.  Check out the series trailer here.

Skeptical About Online STEM Education

The Chronicle of Higher Education has an opinion piece about online learning, which is skeptical of it.  Not a surprise, considering the CHE is full of people from “brick-and-mortar” institutions.  And as is the usual trend with these articles, it comes from a humanities professor.  I found myself in half agreement with a lot of the points Dr. Hieronymi makes.  But I’ll also point out that we can do some of those things with online classes.  Online instructors aren’t just Wikipedia articles that talk; most of them do work to help distill information and provide feedback to students.  But even then, there is something to be said for having face-to-face interaction.  As someone who was a teaching assistant in an introductory programming class, I can say even “technical” skills can transfer more easily in person (sometimes it pays to see someone else do something on a computer right in front of you, instead of shuffling between windows and your program).

But I’m going to present a very different critique of online education, one I never hear anyone mention.  What on Earth happens to lab classes?  I have never heard anyone point out that science and engineering students actually do need to show up to some physical space to touch real equipment and do something with it.  Unless we’re going to drastically rewrite hazardous materials laws, chemistry majors aren’t about to start stockpiling hydrochloric acid in their houses, and I’m not going to get to buy the neutron source I used for my particle physics labs at Home Depot.

And we can’t just virtualize all these labs.  As long as science and engineering jobs require people to physically manipulate materials and equipment to get appropriate settings and amounts, students will need to get used to dealing with the actual imperfections of stuff.  While it was incredibly cool to get to instantly change settings and watch blocks hit each other to show energy and momentum conservation on the lab software I used for my AP physics distance learning class in high school (or something like it), it was nothing at all like what I actually did in college.  In my freshman physics class in college, my partner and I had to actually calibrate equipment to get meaningful results.  Sometimes this was a painstaking process, like spending an hour adjusting springs so the masses we were testing wouldn’t fly out of our bucket and change the momentum (and also be a safety hazard).  Other times it meant dealing with actual

The Department of Homeland Security gets suspicious if a kid starts buying too much material with this symbol

uncertainties, like wondering if a dent in a ball would throw off the rolling results.  And while it can be annoying, it’s also important that STEM students understand the limits of what technology (and physics) will actually allow them to do.  Of course, this doesn’t mean I want to deprive students of technology that could make labs better.  It’s completely fine that we used a digital camera and a computer program to automate calculating the distance of discs on our air hockey table instead of spending 3 hours figuring it out by hand.  And it’s great that computer programs can automate lots of other data collection.  But students also need to be able to adjust equipment as conditions call for it, like adjusting a laser to observe different kinds of samples or understanding that some microscopes disturb the material you’re observing.

To me, labs are the hardest thing to convert to distance learning unless we come up with some radically new way to allow students and other people in training access to materials and equipment.  I can see some ways for this to work, but it all seems complicated once you get to specifics.  In the long run, if this system works, it seems like it could probably equalize educational opportunities all over the country (and world), which I’m all for.  But it seems like in the short and medium term, we end up with a weird system as lab equipment stays in traditional places (and this could easily end up hurting regional education).  Let’s say that 30 or so years in the future, I’m living back in Kentucky and have two children.  For ease of pronoun reference, let’s say I have a daughter, Emma, and a son, Alan [note to self: work on baby names].  My daughter is interested in cosmology.  You can study astrophysics (and that’s what most cosmologists have a degree in) at Kentucky schools, but there aren’t many cosmologists on the faculties (to my knowledge), so Emma ends up applying to out-of-state and private schools that have more opportunities in her area of interest.  My son wants to be a pharmacist, and so, wanting to save money for a Pharm.D (and maybe woo a pharmacy professor), wants to stay in-state and go to the University of Kentucky’s brick and mortar program in chemical engineering (let’s assume state schools still exist).  Since Alan is a Kentucky resident, he should hear back fairly quickly from UK if his application is good; he gets in and accepts.  Emma hears back later, and though she gets into several brick and mortar programs, she doesn’t get admitted to MIT’s.  However, MIT does accept her into their online program, which in the future can grant full bachelor’s degrees along with additional lab work.  Let’s say Emma decides to accept that to save money for grad school.

Here’s my question:  What does Emma do for all her lab courses in physics and astronomy?  Do we ship her off to Cambridge every year so she can complete two semesters’ worth of lab in a few weeks?  If so, then clearly MIT shouldn’t plan on razing all their dorms and facilities.  Or maybe MIT makes senior year on-campus, and Emma does nothing but run around doing labs and a senior thesis after finishing classes?  Does Emma do the labs online?  I hope not.  Does she do physical labs somewhere in Kentucky, and MIT approves them for credit?  If so, MIT’s online degree is a lot less standardized than the traditional one if students routinely transfer all but a few lab credits from hundreds of other schools.  Does MIT arrange for her to do labs at a Kentucky school?  MIT or my family will probably need to reimburse the school for the lab fees.  What if these schools don’t have the same labs or equipment because they were designed for their own programs, which are structured differently from MIT’s?

You could say this example is incredibly specialized, and that’s true.  But that’s also the point.  In most of the discussions of online education, no one points out that there are actual items on the curriculum that need to be done physically; instead, campuses get defended only with vague notions of “peer bonding” and “learning outside the classroom”.  While those things are important, it seems equally reasonable to point out there are things we learn in classrooms that we can’t move online.  And this example still works with more common majors.  Looking up requirements for chemical engineering at the University of Louisville, UIUC, UVA, and Rice University, I saw a few big differences in how their courses and labs are structured (see note below for details).

You could also argue that a university like MIT doesn’t need to give up its brick and mortar facilities because it’s so prestigious, people will always go there.  That’s probably also true.  But the flip side of the whole “online education solves the university bubble” solution is the implication that we just let weaker institutions fade out over time.  And this is where it seems like entire states could end up worse off.  The top 50 colleges and universities in the United States are not even close to being evenly spread around the country.  Hell, you can go up to the top 100 and still manage to miss a lot of states.

Map of campus ministries at top 100 US universities (as ranked by US News). Source:

So what happens to the others, especially in poorer states with lower ranked universities?  Does Kentucky just quit funding physical facilities in its public system because none of our state schools ever made it big?  Probably not, since universities and their facilities are known to have a great positive impact on local economies.  And that’s part of the kicker.  Universities are more than just where people get education; they’re where lots of research that is vital to industry happens.  So unless we start separating these functions, and until someone lets 20-year-olds buy nuclear materials for physics lab, I’ll still be rooting for the old-fashioned university campus.

Note on the different chemical engineering curriculums:  All the schools are ABET-accredited for the bachelor of science in chemical engineering.  But ABET accreditation covers an entire curriculum and just says we expect at least these minimums.  Schools can still play a lot with how they meet those requirements.  UVA seems to make their ChemEs take more chemistry classes and more labs from the chemistry department.  Rice has more on hydraulic equipment, which I suspect comes from an older focus on oil.  And now Rice has their ChemEs do a lot of computer modeling.

When “good” conversation means different things

A new study suggests even “good communication” between doctors and patients may still result in confusion.  Parents of children at Johns Hopkins’ intensive care unit for newborns would routinely say that their children were only “somewhat sick” or “pretty healthy”, even when their baby was diagnosed with something life-threatening.  What was more notable was that parents said these things even after talking with a doctor in what both parties considered to be a good conversation.

Physicians have suggested some explanations.  One is that younger kids with severe problems may not outwardly appear to be sick, so parents without much medical knowledge may assume things are better than they are told.  Also, in defense of the parents studied, I’m not sure many people want to say “Oh, my newborn child might die” to a random researcher.  There’s a definite psychological component here, in my mind.  We encourage patients and families to be optimistic, so even if people are worried, they tend to avoid phrasing things in a matter-of-fact way if that would be incredibly negative.  The culture argument also seems important.  We’ve made a lot of progress in treating severely ill newborns, so premature babies that would once have been considered lost causes are now almost definitely going to get to go home with their parents after a few weeks of treatment and monitoring.

What struck me from the NBC article, though, was the story of the author (who is a doctor) saying patients routinely tell her “After all of these years, no one’s ever explained that to me before.”  She points out that it simply isn’t likely that no one ever explained any of these things to ALL her patients.  And realistically, she wonders how many of those patients who had epiphanies in her office will forget, and thank their next doctor for “finally explaining something”.

Although this problem is particularly prevalent and important in medicine, since doctors routinely interact with people without technical training and the communication can literally have life-changing impacts, I think it extends to a lot of science.  When I’ve tutored, I’ve come across a few students who say I’ve explained something better than anyone before.  While I like to take it as a compliment, on some occasions I’m puzzled, since I know my explanations aren’t that different from the instructor’s or the textbook’s.  Sometimes I think it might be patience.  As a tutor, I can put more time into working with one person than a teacher in a class of 30 or a professor lecturing to 50-100 students.  Dr. Gaines might be more effective than other doctors at explaining things because she might put more time into talking with the patient.

I also wonder if there’s something in context cues and repetition.  I’ve seen lots of students claim to only really “get” a concept after repeating it so often they understand how to apply it.  I wouldn’t be surprised if explanations seem to make more sense to patients when they’re surrounded by doctors and have just gotten a test or procedure done.  When you’re out of that environment where it made sense, and you don’t think about it for a while, you’re prone to forget.  I know I’ve forgotten a lot of stuff about electromagnetism and rotational motion since that wasn’t as important in my final year of undergraduate coursework.  It seems completely logical for a patient to slowly lose comprehension of an illness if it doesn’t bother them for long periods of time.

PS:  A blogging nitpick.  To my knowledge, Gaines’ post doesn’t link to the original study, so I can’t glean anything else from it or link it for you.  Unfortunately, this is a common practice in newspaper and television news websites.

If Only Billy Mays Were Still Around

There’s been a bit of a buzz in battery research lately as chemists have made great strides in truly powering life by the “air you breathe”.  What on earth does that mean aside from being a pointless reference to infomercials I’m obsessed with?  (Aside:  This is actually a problem; I once watched the full half-hour Magic Bullet infomercial because I was bored.)  While my previous post talked about researchers redoing a battery idea of Edison’s, this team at the University of Southern California was tinkering with a more recent design:  “breathing batteries”.  Breathing batteries are basically powered by the rusting of iron by oxygen, though it seems the “breathing” is a bit of a misnomer since the journal article mentions the chemical reactions occurring in liquid (although a lot of literature still uses the term “air”).

Iron rusting actually produces a lot of energy.  If you’ve ever had one of those disposable hand warmers, odds are it was mostly filled with just iron filings and a few other chemicals to speed up the reaction.  But all the heat is coming from the iron corroding REALLY fast.  Iron-air batteries have been around for decades and became very popular during the 1970s energy crisis.  But like the Edison batteries, they fell out of favor when other battery chemistries proved to be more efficient.  Aside from oxygen rusting the iron, there’s a second reaction in the battery that takes charging current and produces hydrogen, and this can consume up to half of the charging energy.  They’ve come back into vogue for similar reasons to the iron-nickel batteries:  the materials are abundant (and cheaper) and safe for both people and the environment.  The Department of Energy hopes improving their efficiency could make for reasonable energy storage in a shift to a renewable energy power grid.
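To get a sense of the numbers, here’s a rough back-of-the-envelope calculation (my own, not from the article), using the standard enthalpy of formation of iron(III) oxide:

```python
# Rough estimate of the energy released when iron rusts.
# Reaction: 2 Fe + 3/2 O2 -> Fe2O3
# Standard enthalpy of formation of Fe2O3 is about -824 kJ/mol.

DELTA_H_FE2O3 = -824.2   # kJ per mole of Fe2O3 formed (standard conditions)
MOLAR_MASS_FE = 55.85    # grams per mole of iron

energy_per_mol_fe = -DELTA_H_FE2O3 / 2             # kJ released per mole of Fe
energy_per_gram_fe = energy_per_mol_fe / MOLAR_MASS_FE

print(f"{energy_per_gram_fe:.1f} kJ released per gram of iron oxidized")
# roughly 7.4 kJ/g -- enough to keep a pouch of filings warm for hours
```

That’s heat when the reaction runs uncontrolled in a hand warmer; the whole trick of an iron-air battery is to capture that same chemistry as electrical current instead.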

Fine iron particles in the USC battery. Everything looks pretty under electron microscopy.


So what made the USC batteries so much better than before?  Pepto-Bismol.  Seriously.  Bismuth sulfide, a close chemical cousin of Pepto-Bismol’s active ingredient (bismuth subsalicylate), was added to the iron electrode.  The bismuth prevented hydrogen formation, reducing the energy loss to only 4%.  It also helped improve how much energy the battery could hold and how quickly the energy could be released, both of which are important factors for storing energy meant for the power grid.
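To put those percentages in concrete terms, here’s a trivial sketch (my own arithmetic, based only on the figures quoted above) of how much charging energy actually gets stored with and without the additive:

```python
# Compare charging losses using the article's figures: the hydrogen side
# reaction can eat up to ~50% of the charging energy in a plain iron
# electrode, versus ~4% with the bismuth sulfide additive.

def usable_energy(charged_kwh: float, parasitic_loss: float) -> float:
    """Energy actually stored after the hydrogen side reaction takes its cut."""
    return charged_kwh * (1 - parasitic_loss)

charged = 1.0  # kWh put into the battery
print(usable_energy(charged, 0.50))  # plain iron electrode: 0.5 kWh stored
print(usable_energy(charged, 0.04))  # with bismuth sulfide: 0.96 kWh stored
```

For grid-scale storage, nearly doubling the energy that survives each charge cycle is the difference between a curiosity and a practical battery.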

Thomas Edison Strikes Again

Chemists at Stanford have helped bring a battery designed by Thomas Edison into the modern age.  Like us, Edison was also interested in electric cars, and in 1901 he developed an iron-nickel battery.  In a case of buzzwords being right for a reason, the Stanford team used the same elements as Edison, but structured them on the nanoscale.  Edison’s original design sounds like it was essentially just one alloy of iron and carbon for one electrode and one of nickel and carbon for the other electrode.  The new battery consists of small iron pieces grown on top of graphene (that wonderful form of carbon we’ve talked about before) for the first electrode and small nickel regions grown on top of “tubes” of carbon (which probably means nanotubes) for the second.

The new battery charges and discharges nearly 1,000 times faster than traditional nickel-iron batteries, but even with that improvement it is only now about equal to the energy storage and discharge abilities of our modern lithium ion batteries.  Although there’s a lot of research being done on improving our lithium ion batteries, there are some unique advantages to the nickel-iron batteries.  For one, there’s a lot more iron and nickel than lithium, meaning the batteries could be cheaper.  Nickel-iron batteries also don’t contain any flammable materials, while lithium batteries are capable of exploding.  While the nickel-iron batteries might not appear everywhere, their inability to explode could be a boon to electric car manufacturers.

Tweets from Space

In honor of the Curiosity rover landing on Mars in less than 24 hours (knock on wood), why not check in on its tweets to see how it feels?  You read that right.  NASA has set up a Twitter account for the Curiosity rover.  I was about to declare this the first ever official Twitter for a scientific experiment (while CERN has an account, it’s for the entire organization, and all the LHC accounts are made by enthusiasts).  However, it turns out Mars tweets are old hat for NASA, which set up an account for the Phoenix mission back in 2008.

Since robotic intelligence is not advanced enough yet for space rovers to actually talk to us, @MarsPhoenix and @MarsCuriosity are actually run by NASA’s social media team.  But it looks like they hope to have updates in almost real time as Curiosity gets ready for major milestones (at posting time, Curiosity last reported getting closer to Mars than the moon is to Earth).

I think these Twitter accounts are great moves by NASA.  Twitter’s character limit is a great way to send bite-sized mission updates to the broader public, and judging by its follower count, people love it.  Sometimes science in social media gets a bit of flak (I remember a few people saying the CDC’s zombie comic was a money waster), but I’m all for new ways to reach out to the public.  Important results should be vetted by peers before we accept things as fact, and the CDC comic was really only super helpful if you read the emergency preparedness guide with it, but there’s nothing wrong with getting people to care about this stuff in the first place by giving them a taste of excitement.

Hat tip:  Mars Orbiter Adjusts, Rover Gets Twitter Account from Greg Laden on Science Blogs

In Which the New York Times Astonishes Me

I’m a bit late, but I did want to respond to a New York Times editorial that seemed to produce the most jaw-dropping responses in recent memory.  Fortunately, a Washington Post blogger has already produced a wonderful response that summarizes most of what I want to say.  But there is a bit I want to add.

  • Hacker’s selection of “other schools” to prove math is too important in college admissions struck me as ironic.  Rice, WashU, and Vanderbilt are all top 20 schools.  They also all have fairly big engineering programs, which would skew the math score of the “average admitted student” higher.  If Hacker truly wanted to show readers that math is over-emphasized in admissions, he should have considered looking at SAT scores from state schools, which are obligated to take on more students as public institutions (but again, making sure to exclude incredibly competitive ones like UC Berkeley, UT-Austin, or Michigan), or scores for liberal arts colleges, which tend to place less emphasis on SAT scores.
  • The poor logic here can be applied to any subject taken for multiple years in high school.  By his logic, why bother with more than one year of English?  It’s just reading different books and doing increasingly rigorous analysis.  One could argue you don’t really need to understand the difference between a metaphor and a simile unless you’re going into English, linguistics, or some other language-focused field.  You could just do a single year covering important literature and another year on grammar and composition.

This op-ed seems to follow what I consider a worrying trend in science education.  Many people seem to think science education needs to be more “practical”.  I’ve heard of middle and high schools that are magnet schools for broad topics like “sustainability” and “health”.  While things like art and science magnet programs make sense to me, because they basically mean a school has additional resources like extra lab equipment or more instructors for specialized classes, I don’t get how you teach something as interdisciplinary as health to a high school student still taking basic biology, chemistry, and social studies without taking away from the more general concepts of these fields.

I remember an LA Times piece several years ago about a sustainability magnet program that had kids growing a garden in biology and somehow tying that into every class.  As Willingham points out, what happens to students when they need to do something besides botany in biology?  But I also wonder if this early, practical education has another downside.  If a student doesn’t like the application the class focuses on, will they still consider liking the subject?  At my undergrad school, we didn’t have a single biology department; we had an ecology department and a molecular biology department.  I had several friends in both, and I could certainly see my molecular biology friend interested in genetic engineering being completely bored by growing and observing plants, and my ecologist friend hating a medically-oriented biology class.  Our current, “grab-bag” science education system might not be the best, but I feel that we’re more likely to get students interested and educated in science by introducing them to basic concepts and applying them to everyday life instead of having their first taste be a specialization.


Peter Thiel versus Google

So the tech world is abuzz with the CEO equivalent of a catfight.  At a dinner in Aspen, Colorado (which seems to have cool meetings like every other week over the summer), Peter Thiel (co-founder of PayPal) and Eric Schmidt (executive chairman of Google) were members of a panel discussion.  Thiel launched the first strike, saying “Google is no longer a technology company”.  Google is known for having a large amount of cash in its coffers, but spends little of that on research and development.  Thiel argues Google is just sitting on it because the company “is out of ideas”, and accused Microsoft of being in a similar position.

This seems to fit into Thiel’s more pessimistic view of the future of technology.  Unlike most people in this day and age, Thiel thinks the pace of scientific and technological development is slowing down.  Though I think that might only be true based on his definition of progress.  Thiel seems to think only “disruptive technologies” are meaningful.  Improving on existing technology doesn’t seem relevant to him, as he said Google is stagnating by sliding by on search, and his venture capital firm’s motto is “We wanted flying cars, instead we got 140 characters.”

Thiel’s argument struck me as weird for two reasons.  First, I’m not really sure why he only called out Google and Microsoft.  If any tech company really strikes me as sliding by on older technology, it’s Apple.  Yes, yes, Apple makes wonderful devices.  But very little in any of the Apple products released in the last decade features new technology.  Look at the iPhone.  Touchscreens have been in various products for decades.  The music store and player are essentially an iPod with a touchscreen.  The main innovation was the idea of combining these things (which had been slowly coming together) and adding the app store.  Apple is great at designing attractive and easy-to-use technology and developing good software to go with it.  But it didn’t require any great basic research breakthrough in hardware or software to go from the iPod, Nintendo DS, and cell phone existing separately to the iPhone.  To my knowledge, Apple does no basic research, and it’s known for spending less than most companies on overall research and development.  Google and Microsoft have basic research labs.

Secondly, I’m not sure there is any “tech” industry in Thiel’s mind when you compare Google and Microsoft to other companies.  Both are in the top 10 for R&D spending in terms of both total spending and percentage of revenue.  I understand that Thiel believes in big, breakthrough innovations, and puts his money where his mouth is by investing in lots of research himself.  But I feel like he doesn’t fully appreciate how very few companies are like him.  He’s called the Great Recession a symptom of technological underdevelopment:  technology wasn’t growing the economy fast enough to justify housing prices.  But can’t the fact that our economy placed so much of a bet on housing also indicate an inefficiency in the market (I know virtually no economics, so pardon the abuse of terms) that also leads to underinvesting in technology with few short-term payoffs?  Thiel himself seems to acknowledge this, as his new grant foundation, Breakout Labs, says it aims to fill the gap between federal funding and venture capitalists who want results on the market soon.