# In Praise of Social Science and Science Studies at NSF

Consider this a slightly belated reaction (and a slightly different take) to the bill proposed by Congressman Lamar Smith that would require the National Science Foundation to certify that all the research it funds is

1.  “… in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;
2. “… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and
3. “… not duplicative of other research projects being funded by the Foundation or other Federal science agencies.”

I’d like to address why requiring ALL projects funded by NSF to meet all these criteria is bizarre (criterion 3 in particular: preventing the funding of multiple research paths, or even of what would essentially be the reproduction of an experiment, suggests Smith literally does not know how scientific research is done), but I’ll save that for a future post. For now, I’d like to look at what seems to be the underlying motivation of the first criterion by examining the projects Smith appears to be concerned about.

Smith is referring to the mission statement of the NSF from the legislation that created it: “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…” But his criterion focuses only on the benefits of technological progress as ends in themselves and ignores the broader ecosystem of science and engineering from which they come. I sense this in Smith’s letter to the Director of the NSF, where he asks that NSF provide the reviews from the following approved grant proposals:

1. Picturing Animals in National Geographic, 1888 – 2008 (awarded $227,437)
2. Comparative Histories of Scientific Conservation: Nature, Science, and Society in Patagonian and Amazonian South America (awarded $195,761)
3. The International Criminal Court and the Pursuit of Justice (awarded $260,001)
4. Comparative Network Analysis: Mapping Global Social Interactions (awarded $435,000)
5. Regulating Accountability and Transparency in China’s Dairy Industry (awarded $152,464)

Project 3 struck me as funny, because evidently Representative Smith thinks understanding the efficacy and perception of international legal systems is not relevant to the welfare of the United States. Project 4 also seemed like a strange target, because its abstract makes a direct case for how mapping social networks could be used to better prevent disease, and for how understanding social networking is important to understanding the spread of phenomena like the Arab Spring.

Projects 1, 2, and 5 are studying science and technology in society. Looking at pictures seems trivial, but it’s more than that: National Geographic is a major popular science publication, so it can be representative of broader trends, and the project examines how changes in science affect the portrayal of nature and views of conservation. Project 2 looks at how an aspect of environmental policy has interacted with society; conservation is now a major component of American policy at all levels of government. Project 5 examines a major application of technology, regulating food production, and is specifically interested in how technical practices are chosen. Did Smith forget about the melamine scandal? Understanding how these policies affect people and how they come about seems genuinely relevant to improving regulatory systems, which you would think would appeal to someone who hopes to make government more efficient.

All three of these projects also aim to produce work about how scientific knowledge diffuses into the lives of everyday people. That is actually incredibly relevant to the point of NSF (it’s the whole “broader impacts” section researchers are required to write) because it’s part of how you justify spending taxpayer money. The wonder of science isn’t just “whizbang, we have iPhones with video cameras to talk to mom and can communicate with satellites in orbit to figure out where you are” (although the Chinese regulation project kind of covers that side too, albeit more for commercial industry than for consumers). Shouldn’t we also be proud that most people in our society now learn about the history of Earth and life, and pick up some of the basic science that explains everyday life?

So these things all can affect our lives, and it makes sense for the NSF to fund projects that study the social aspects of science. I’m also going to say I’m not sure most of Smith’s supporters have even really read these abstracts since they never seem to know about the broader impacts or the actual research project, and mostly focus on how funny the titles sound.

# We DO Use Math… Kinda

A recently published survey looks at how often Americans use math in their jobs, and after looking at the data, I think Andrew Hacker (of “Is Algebra Necessary?” fame) should take a look too. Although The Atlantic piece seems to be spinning it as “look how little math we use,” I honestly think it goes against the grain of the argument algebra-skeptics were making last year.

Look at the graph. Nearly 20% of Americans use algebra in their jobs. True, that’s definitely short of a majority, but it’s not some rarefied, elite stratum of the population. In a typical math class of around 30 students, 6 of them are going to use algebra regularly in their work. That’s probably a lot higher than the number of students whose jobs will ever involve explicating a text from English class or the gas laws from chemistry. Also notable is the breakdown by job category: blue collar jobs aren’t much lower in their algebra usage than white collar jobs, and blue collar trades actually surpass white collar management in algebra usage.

So what does it mean? Lots of stories seem to be running with “clearly not many people use algebra”. But I’d say trying to make a subject that 1 in 5 people regularly use just an elective sounds like a bad move. If algebra were just an elective, it seems likely that lots of kids who aren’t doing college preparatory work in high school may never take it, not realizing that it could be relevant to a decent number of jobs they’re interested in (especially if algebra becomes depicted as something only scientists need).

Does that mean everyone should learn algebra before high school, or that everyone should take four years of math, a policy proposal that is commonly criticized? No, but that’s a different discussion than just ripping algebra out of the core curriculum. I think it’s perfectly fine if a student waits until high school to take algebra if they have more difficulty with math. And I don’t think students should be required to take four years of math in high school, but that’s still an uncommon requirement anyway. For now, I’d just like everyone to acknowledge that someone who uses algebra on the job is a person in your neighborhood.

# New Page at the Top!

So there’s been something I’ve been wanting to start for a while, but I waited until I learned more about how WordPress works (I am clearly not a web designer yet). You’ll notice a new link at the top titled “Trivial Explanations” (and also a new blog category with that name). In addition to posts where I look at science or its applications in the news, I also want to start explaining some of the concepts that are commonly referred to in science/tech journalism without much explanation. For instance, in the post on masers, many of the articles I linked to mentioned that masers could be good amplifiers for cell phones but didn’t explain why, while I briefly mentioned the relevant property was stimulated emission (the SE in LASER or MASER). I also plan on explaining things that might not be relevant to any far-off application but just appear in lots of papers anyway, to build up your background in seeing how things might be related (for example, many materials are “doped,” but that is almost never explained in news pieces because it’s a basic step in the process). If you follow XKCD, think of it as a more practical (or less awesome) version of What If?

With that in mind, I need things to explain! So if you can think of anything you hear about in the news or in your life that you don’t understand, please send me your requests. For now, leave a comment on this post; I’ll try to come up with a better system in the future. (I’m a bit paranoid about posting my email.)

# Revising the Universe

This week, scientists affiliated with the European Space Agency’s Planck observatory announced several discoveries from the first 15 months of Planck observations. Planck observes the cosmic microwave background (CMB) radiation. The CMB is the oldest light we can observe in the universe (from about 380,000 years after the Big Bang), because it comes from the time when neutral atoms finally formed and stabilized and photons were no longer constantly absorbed by free electrons and protons. Because of its age, studying the CMB enables astronomers and cosmologists to look at the early structure of the universe.

The newly released Planck data contains a few surprises, some vindications of previous work, and many things that are a mix of both. One of the things I found most interesting was the newly calculated age of the universe. Based on the Planck observations alone, the team estimates the age of the universe to be about 13.82 billion years. What’s great about this is that it falls within the resolution of the previous estimate from NASA’s WMAP data, 13.73 billion years +/- 0.12 billion. The error bars on the WMAP data mean that anything within 0.12 billion (120 million) years of the WMAP value is pretty much indistinguishable from it. That the Planck value falls within that range means our observations and models seem to be very good at describing the universe.
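As a quick arithmetic check using the numbers quoted above, the Planck age does sit inside the WMAP error bars:

$\Large 13.73 - 0.12 = 13.61 \le 13.82 \le 13.85 = 13.73 + 0.12$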

What’s even more interesting (at least to me) is the stuff that doesn’t entirely jibe with our understanding of the universe. Sure, the age is a bit different because of a change in when dark energy (the force causing the accelerating expansion of the universe, discovered in the late 90s) is believed to kick in, but that slight change is practically bookkeeping compared to the other things. When discussing fundamental physics, I mentioned that one major kink in our theories is that there seem to be “preferred” directions for giant globs of stuff to clump together in the universe. The Planck data shows that many of the deviations from randomness that WMAP found still hold and weren’t just caused by limits in WMAP’s data.

So what do we have?  The big deal is that the universe seems to be off-balance. If you look at the image below, comparing Planck’s data to the model, the left and right sides seem to have different brightness. Since the brightness of the CMB is related to where mass would accumulate, this would also mean there’s more stuff in one half of the universe than the other, based on what we can see. It’s also worth noting that Phil Plait says the distribution of hot and cold spots still seems random; it’s just the intensity that isn’t.

Image of deviations of Planck’s data from the standard cosmological model. Credit: ESA and the Planck Collaboration

Another quirk is the so-called CMB cold spot, a region initially found in WMAP data that was both larger and colder than expected for a random distribution. In recent years, some people challenged its existence and said its uniqueness might be due to how WMAP’s data was analyzed, but it still holds up in the Planck data release (although I can’t find out if the Planck team used the same statistical analysis as WMAP, so the Michigan scientists might still have a point).

So what do these mean? Well, a popular theory for each of these is that they are “imprints” from another universe. If you’ve heard anything about string theory, you probably know that it requires the existence of many other dimensions (10 or 11, typically, depending on the exact form). In some versions of string theory, our 4D (3D space + 1D time) universe can move around in a higher-dimensional space called the bulk, where it could also potentially interact with other universes.

This’ll be an exciting time for cosmologists and physicists as they try to reconcile their theories with the new observations.

PS: I can’t find whether Planck shows anything about the “axis of evil” alignment or dark flow, which are other interesting structural observations. But both of them depend on large scale surveys like Planck (and dark flow was specifically based on CMB data), so I could see these being looked at as people have more time to process the released data.

# Smallest Exoplanet Found

Astronomers recently announced finding a new exoplanet (a planet in another solar system). That alone wouldn’t be a big deal anymore, since in the last 20 or so years we’ve found more than 800 (with a few thousand other “candidates” requiring more study to verify). What’s special about this one is how small it is. Newly discovered Kepler-37b is only about 3,865 kilometers across, making it smaller than Mercury and barely larger than the Moon.

Aside from setting a new record for smallness, this discovery also represented a unique experiment. Kepler-37, the star the planet orbits, has not been studied much, so astronomers were uncertain of its size (both volume and mass, as I understand it). One way to determine this is to see how the stellar equivalent of seismic waves behaves in the star’s interior (a discipline called “asteroseismology”). It turns out that for most stars, the way the star oscillates is linked to its size. You might wonder how this is possible since, on Earth, seismology is a fairly hands-on affair with detectors everywhere. With stars, we can look at their light. If you split the light from a star into all its colors (giving you a “spectrum”), you’ll see dark lines appear at some spots.

Spectrum of the sun

These dark lines are where the light is absorbed by the atoms in the star. If you have really sensitive equipment to read the spectrum, you’ll see that these lines actually move slightly over time. This is because of that ubiquitous feature of waves, the Doppler effect. Just like an ambulance siren sounds higher pitched when approaching you and deeper when it passes you and moves away, light does the same thing. So if a segment of a star is expanding toward you when the light is emitted, we see the spectral lines at higher frequencies (“blueshifted,” as they move closer to the blue end of the spectrum), and a shrinking section has lines at lower frequencies (“redshifted”).
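To make the shift concrete, here’s a minimal sketch (my own illustration, not the actual asteroseismology analysis) of how a known absorption line moves for a given radial velocity; the line and the velocities are just example numbers.

```python
# Minimal sketch of the (non-relativistic) Doppler shift of a spectral line.
# Illustrative numbers only -- not the actual Kepler-37 analysis.

C = 299_792_458.0  # speed of light, m/s

def shifted_wavelength(rest_wavelength_nm: float, radial_velocity_ms: float) -> float:
    """Observed wavelength for a source moving at the given radial velocity.

    Positive velocity = moving away from us (redshift, longer wavelength);
    negative velocity = moving toward us (blueshift, shorter wavelength).
    """
    return rest_wavelength_nm * (1.0 + radial_velocity_ms / C)

if __name__ == "__main__":
    h_alpha = 656.28  # nm, a well-known absorption line in stellar spectra
    for v in (-1000.0, 0.0, 1000.0):  # m/s, roughly the scale of stellar surface motions
        print(f"v = {v:+7.1f} m/s -> line observed at {shifted_wavelength(h_alpha, v):.4f} nm")
```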

The other cool thing about this discovery was who funded it. The asteroseismology work was not funded by NASA, but actually by a crowdfunding project called Pale Blue Dot. The organization has people “adopt a star”, but instead of pretending to let you name it and making you pay for an expensive diploma (*hem hem*), the money star adopters give goes to fund research groups working with data from the Kepler mission.

# Modern Physics Isn’t All or Nothing

My physics crush, Lisa Randall, was recently interviewed by New Scientist about a “Theory of Everything.”  I feel like there’s some context we’re missing, because the first question (“Doesn’t every physicist dream of one neat theory of everything?”) seems really abrupt. But I like her answer. I might quibble and say I think physicists generally hope there is a “theory of everything,” but it definitely doesn’t drive all work. Work on a theory of everything is just one branch of physics. There’s also a lot of work that doesn’t depend on a theory of everything (biophysics and condensed matter physics are still trying to work out how to basically go from quantum mechanics to everyday life) and other work that is important to gathering the observations a theory of everything needs to explain (like astrophysical and cosmological observations showing there might be preferred directions for structures in the universe). Asking this question would be like asking a heart surgeon if fully understanding the human genome is her dream. It might help her job a bit, but there are a lot of other problems in her field that also need to be solved, and it doesn’t really influence her work.

I also liked her argument against mathematical beauty. Math can guide physics, but empirical observation is also important.  When we moved from a geocentric to a heliocentric model, one of the problems with the heliocentric model was that it didn’t accurately predict where planets were in the sky. This was because Copernicus assumed orbits around the Sun had to be circular, because of an obsession with the “perfection” of circles.

# When Less is More, part 2: Flipping the Sign

In part 1, we looked at absolute temperature scales.  Before an aside about not setting your kitchen on fire, we defined absolute zero as the point where atoms have no more energy from their motion (or kinetic energy).  We’ve now also nicely set ourselves up for the “big deal”.  If absolute zero is where there’s no kinetic energy, how do you get below that?  Can atoms somehow have negative kinetic energy?  The simple (and relevant to this discussion) answer is no.

Instead, we need to add something we’ve been neglecting from our discussion of temperature.  So far we’ve only talked about heat.  But another concept is used in the “relevant” definition (I’ll have more to say on this below) of temperature for this experiment:  entropy, which is basically the amount of disorder in a system.  If you took an advanced physics or chemistry class in high school (or maybe a first-year class in either of those fields in college), you might have learned the second law of thermodynamics, which states that entropy tends to increase, and cannot decrease in a closed system.  It also defines how entropy changes in physical processes:  at a given (absolute) temperature T, if an amount of heat dQ flows into a system (we use a negative sign for heat flowing out), then the entropy S changes by an amount dS given by

$\Large dS=\frac{dQ}{T}$

Of course, definitions can sometimes work both ways, and that’s what physicists decided to do.  Based on this, they solve for T, and we “officially” define temperature as

$\Large T=\frac{dQ}{dS}$

And so with this definition, we can see where a negative temperature comes from: anytime the heat flow and entropy change have different signs, T should be negative.
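As a made-up numerical illustration (not from the paper): if dQ = +1 J of heat flows into a system whose entropy drops by dS = -0.01 J/K in the process, the definition above gives

$\Large T=\frac{+1\ \mathrm{J}}{-0.01\ \mathrm{J/K}}=-100\ \mathrm{K}$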

We can also be more general with that last equation and look at how entropy changes with respect to energy (if you’ve taken calculus, I mean the derivative of entropy with respect to energy), which gives the inverse temperature:

$\Large \frac{1}{T}=\frac{d}{dE}S(E)$

Of course, “anytime” would still be rare in our everyday experience.  At the low energies and macroscopic scales of normal human life, heat (or energy) and entropy increase together.  But scientists can set up quantum systems (read: basically any system on an atomic scale) where we break that trend.  This is where a more basic understanding of entropy comes in.  If you have a system where atoms can have two different energies, the greatest entropy is when half of the atoms are in each energy state, because that allows for the greatest number of combinations of atoms.  Look at this plot of the number of possible ways to choose k atoms from a set of n total atoms, based on the binomial coefficient formula $\binom{n}{k}=\frac{n!}{k!(n-k)!}$, in this case where n is 50.

Note the giant peak at 25, or n/2

As you can see, the number of combinations explodes close to the halfway point.  Also note the symmetry of the combination curve in this case.  So let’s say with our 50 atoms, k equals the number of atoms in the second, higher energy state.  (For those who know about electron orbitals, realize here that we’re not talking about the energy levels of the electron; we’re actually talking about the energy state of the whole atom.  This can come up in systems where atoms are placed in magnetic fields.)  In most everyday systems, you’d have a lot of atoms in the lower state, and so we have a small k.  Adding energy kicks atoms up to the higher state, so k increases, and the number of possible configurations (and therefore the entropy) also increases.  But the opposite happens if we start with a high-energy system, where k is already close to 50.  In that case, adding energy would decrease the entropy, and so by the third equation, we have a negative temperature.  Ta da!
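If you want to see that sign flip yourself, here’s a minimal sketch (my own illustration, not the researchers’ calculation) that counts configurations for 50 two-level atoms, treats entropy as the log of that count, and checks the sign of dS/dE as more atoms are pushed into the higher state:

```python
# Toy model: 50 two-level atoms. Entropy ~ log(number of configurations);
# energy ~ number of atoms k in the upper state. The sign of dS/dE is the sign of 1/T.
# Illustration of the argument above, not the experiment's actual analysis.

from math import comb, log

N = 50
entropy = [log(comb(N, k)) for k in range(N + 1)]  # S(k), in units of Boltzmann's constant

for k in (5, 24, 26, 45):
    dS_dE = entropy[k + 1] - entropy[k]  # finite-difference dS/dE (one excited atom = one energy unit)
    sign = "positive T" if dS_dE > 0 else "NEGATIVE T"
    print(f"k = {k:2d}: dS/dE = {dS_dE:+.3f} -> {sign}")

# Below the halfway point (k < 25), adding energy raises the entropy (T > 0);
# above it (k > 25), adding energy lowers the entropy, so 1/T (and T) is negative.
```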

The idea of a “simple” energy state system having a negative temperature is actually kind of old hat; i.e., we did it already.  (In fact, if you know how lasers work, this basically describes the population inversion of electrons; we just don’t typically ascribe a “temperature” to electrons confined in solids.)  So what’s the big deal about this new research?  While the atoms were in a gas state, the team was able to arrange them in a lattice in space and actually prevent the atoms from moving.  By rapidly adjusting the magnetic field and lasers to trap most of the atoms in a high energy state, they made what should have been an energetically unstable arrangement (like a pencil balanced on its tip) stable.

Now that we have a spatially spread-out system at a negative temperature, we can also test lots of interesting physics.  For example, the combination of stability on millisecond time scales (which is LONG for particle physics) and high energies can allow for unique probing of the Standard Model, such as the formation of structures defined by the strong force.  The researchers also point out that in this negative temperature regime, the atoms also experience negative pressure, which is similar to the force believed to drive the expansion of the universe.

So that paragraph ends the NEW science, but I promised I would go more into the definition of temperature.  Or more accurately, into why many people (look at the comments on the review) seem to complain that this really isn’t a negative absolute temperature, especially as those slightly more in the know freak out when they learn that systems with negative absolute temperature behave as if they were hotter than a temperature of positive infinity.  I’ll explain that confusing fact briefly.  If you put a negative temperature object next to a positive temperature object and there’s no external energy source, heat will flow from the negative temperature object to the positive temperature one.  As I mentioned, the 2nd Law of Thermodynamics says entropy increases in closed systems.  Adding energy to the negative temperature system would decrease its entropy (positive temperature systems lose entropy when they lose energy, and the negative temperature system loses entropy when it gains energy).  To maximize the entropy, heat needs to flow out of the negative temperature object and into the positive temperature object, bringing both objects closer to that giant entropy peak in the middle.

Those who complain that the temperature is only negative because of a weird definition resort to arguments about kinetic energy.  Unless you take a course on statistical mechanics, you almost never see the definition of temperature we just derived here.  Instead, you typically talk about how temperature is a measure of the average kinetic energy of the atoms in a system, and absolute zero is then described as the point where all atomic motion stops.  This then leads people to an obvious question, “How can atoms move less than being stopped?”

It’s a good question.  The problem is that the basic definition people are using isn’t the right one.  As one of the wonderful SciBloggers explains, the kinetic energy is important, but not in the way that simplification leads most people to believe.  It’s not the mere average of atomic energies that matters, it’s the nature of the probability distribution of energy around the average.  In statistical physics, this is described by the (Maxwell-)Boltzmann distribution.

Plot of number of atoms at each velocity for a variety of temperatures based on the Boltzmann distribution (given in Celsius, not Kelvin). A plot for energy would be similar. (From Wikipedia)

In the figure, you’ll see that the average velocity does increase with temperature.  Another trend is that the distribution widens with increasing temperature.  And if you analyzed the graphs, you’d see that most atoms have less energy than the average energy.  For a negative temperature system, these features are slightly tweaked.  The average energy still increases with temperature (so -1 K has a higher average energy than -100 K).  The distribution widens at lower temperatures.  And the big deal is that most particles in a negative temperature system have a higher kinetic energy than the average.  The resulting distribution only looks like a Boltzmann distribution if we say the temperature is negative, and so that’s why we go with that.  So it’s not that physicists have lied; it’s just kind of bizarre compared to our normal experience.
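As one more toy illustration (again my own sketch, not the researchers’ model), you can plug a negative temperature straight into the ordinary Boltzmann factor for a two-level system and see the inversion directly: at positive temperature most atoms sit in the lower state, while at negative temperature most sit in the upper one.

```python
# Toy two-level system: Boltzmann occupation of the lower and upper states.
# A negative temperature simply inverts the populations. Illustrative numbers only.

from math import exp

K_B = 1.380649e-23   # Boltzmann constant, J/K
DELTA_E = 1.0e-23    # energy gap between the two states, J (made-up value)

def populations(temperature_k: float) -> tuple[float, float]:
    """Fraction of atoms in the (lower, upper) state at the given temperature."""
    w_lower = 1.0
    w_upper = exp(-DELTA_E / (K_B * temperature_k))
    z = w_lower + w_upper
    return w_lower / z, w_upper / z

for T in (0.5, 2.0, -2.0, -0.5):  # kelvin
    lower, upper = populations(T)
    print(f"T = {T:+4.1f} K: lower state = {lower:.2f}, upper state = {upper:.2f}")
```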

There’s also one slightly more intuitive explanation that I’ve been using to explain the concept of negative temperature.  First, I need to dispel another aspect of the definition that people are confused about.  Atoms at absolute zero don’t have zero of everything else.  Quantum mechanics says there is a zero-point, or minimum, energy that all objects in a system have.  And I’m pretty sure the uncertainty principle requires anything with a non-zero energy to have a momentum, which means it must move.  Instead, physicists define absolute zero as a point of minimum entropy.  In order to minimize the entropy of a negative temperature system, we have to add energy to it.  In other words, we have to ADD heat to a negative temperature system to bring it to absolute zero.