Is It Time for a Science Debate?

While running errands yesterday, I realized it was 2 PM on a Friday, and I immediately switched my car’s radio to NPR to listen to Science Friday.  (Aside:  I think Science Friday is one of the best science programs for the general public.)  Yesterday, they had a particularly interesting segment about science in the presidential campaign, or more specifically, the lack of it.  An organization called Science Debate is hoping to get Mitt Romney and Barack Obama to participate in a debate with questions focused on science and technology issues.  They started in 2007, and although they failed to get a full “science debate”, they did manage to get the Obama and McCain campaigns to write responses to their questions.  They haven’t gotten much farther this round.

The lack of progress makes me wonder: do we actually need a separate science debate?  Part of me says yes, but another part has doubts.  The fact that the 2008 campaigns took the time to answer Science Debate’s 14 major questions impresses me.  I’m not sure I would expect much more this time around, and a full debate also seems somewhat unnecessary.  The written answers from the Obama and McCain campaigns are actually very thoughtful for campaign rhetoric.  I doubt we would see responses with half as much detail in our mediocre televised presidential debates.

Having read through the answers, it’s clear that what a presidential candidate says about science policy means nothing on its own.  A president can help lead, but if Congress doesn’t want to follow, nothing will happen.  Since most science issues are matters of funding and regulation, the power is nearly all in the legislative branch’s hands.  The president can really only influence how strictly executive agencies like the EPA, Department of Interior, and Department of Energy enforce regulations or propose some limited programs.  If we want to see more science in government, it probably won’t happen until we start electing more scientifically knowledgeable people to Congress.

I also had one major criticism of the segment.  A recurring statement was about how little scientists and researchers engage with the public, and how they need to do more.  While I agree with this (clearly I made this blog for something), it was disappointing that none of the panelists mentioned the reasons scientists don’t reach out.  Even though universities claim to value “public service” from professors, it almost never matters when faculty come up for tenure.  Many academics also look down on popularizers of science, believing popularization dumbs down research.  When I interviewed a chemistry professor at my undergrad institution, he told me how his colleagues felt he wasted his time on his outreach and education projects, even though he was considered one of the most influential chemists of the last decade.

It’s also really hard to find opportunities to explain work to the public that doesn’t touch the “big picture.”  As a blogger from the wonderful ScienceBlogs points out, “you would be hard-pressed to find volumes on condensed matter physics, biophysics, the physics of ‘soft’ matter like liquids and non-linear dynamics. And yes, these are bona fide fields of physics that have engaged physics’s best minds for decades and which are as exciting as any other field of science.”  He notes this has become a vicious cycle: “The high-profile topics are attractive to the general public, so they buy books about those subjects, so publishers publish more books about those subjects, which makes those areas more popular, and so on.”  (Another aside:  I’ve always felt there’s a great book waiting to be written about the many aspects of everyday life we don’t understand, like where turbulence comes from, or how we still can’t fully explain the behavior of water molecules on large scales.)

On the non-physics end, the SciFri panelists also admitted that the public mainly thinks of medicine when they think of science, which is telling about public support for “science” if that’s all people know.  The National Institutes of Health already get most of the government’s research money, but honestly, many of the big breakthroughs in medicine over the last century originally came from unrelated fields (radiation therapy and virtually all the imaging techniques come from physics, and genetics is becoming heavily influenced by statistics and computer science).  This also poses lots of problems for science PR.  Most “science” news and policy people I’ve seen focus mainly on health, which, while important, is only one aspect of science (and only one area where your tax dollars go in research).  So to hear the Science Debate CEO and the director of public affairs for the American Physical Society not offer any ideas for innovative science outreach, let alone mention any of the problems people face in trying to reach out, felt like a disservice to aspiring civically minded scientists.

Quick Updates

A friend showed me a very cool Kickstarter that hopes to take everyday people’s experiments (and those of a few scientists) up to space!  Check it out here.

NPR Science Friday had an interesting segment today about a movement hoping to have a “science debate” between the presidential candidates.  A recording of the segment and additional reading are on the Science Friday website.  I hope to post a response to it soon.

And the Olympics just started!  NBC has a series of videos called “Science of Olympics” looking at the science and engineering that go into the sports and the equipment the athletes are using.  If you start here, you’ll see the other Science of Olympics videos in the related videos category.

O Pioneers!

Some practically ancient technology has recently been used to rule out some exotic physics theories.  The Pioneer 10 and 11 probes, the first successful space missions to Jupiter and Saturn, respectively, continued to provide data long after their launches in the early 1970s.  After the Pioneers observed their target planets, they kept transmitting radio signals back to Earth, and NASA would also bounce radio waves off the probes.  Just as you can tell whether an ambulance or train is rushing toward you or away from you by the change in pitch, NASA could use the Doppler effect to measure the velocity of the probes along their paths.
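
To get a sense of scale (my own illustration, not mission-exact numbers): for a craft receding much slower than light, the one-way Doppler shift is roughly Δf ≈ f·v/c.  The S-band downlink frequency near 2.29 GHz and the 12 km/s recession speed below are both my assumptions for the sketch.

```python
# One-way, non-relativistic Doppler shift: delta_f ≈ f * v / c.
# The downlink frequency and recession speed are illustrative
# assumptions, not exact Pioneer mission values.
c = 2.998e8      # speed of light, m/s
f = 2.29e9       # assumed S-band downlink frequency, Hz
v = 12_000.0     # assumed spacecraft recession speed, m/s

delta_f = f * v / c
print(f"Doppler shift ≈ {delta_f / 1e3:.0f} kHz")  # tens of kHz on a GHz signal
```

The craft’s whole speed shows up as a shift of only tens of kilohertz on a gigahertz carrier, and the anomaly changes that speed so slowly that the corresponding drift is well under a hertz per year, which is why the modeling had to be so careful.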

Astronomers wanted to use this data to study the influence of gravity farther out in the solar system, where the mass of the Sun and outer planets wouldn’t overwhelm smaller effects.  Of course, studying “small effects” in deep space required accounting for many variables.  John Anderson, the lead analyst of the Doppler data, started to notice small differences between his model’s predictions and the Pioneers’ actual velocity and position data beyond Neptune’s orbit.  The probes were experiencing a small acceleration towards the Sun, slowing them down.  By small, they mean really small:  the acceleration was calculated as 8.74 × 10⁻¹⁰ meters per second per second.  (To help put that in context, it would take about 50 years of constant acceleration at that rate just to reach the average person’s walking speed.)
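
A quick back-of-the-envelope check on that figure, taking a typical walking speed of 1.4 m/s (my assumption):

```python
# How long must the anomalous acceleration act to reach walking speed?
a = 8.74e-10                 # Pioneer anomaly, m/s^2
v_walk = 1.4                 # assumed typical walking speed, m/s
seconds_per_year = 365.25 * 24 * 3600

years = v_walk / a / seconds_per_year
print(f"≈ {years:.0f} years")  # about 50 years
```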

For the probes to undergo that acceleration, some force must be acting on them.  Anderson’s team originally wrote it off as unexpected gas leaks from the Pioneers’ thrusters.  But the anomalous acceleration persisted even after the thrusters should have run out of fuel, and that started to concern astronomers.  Anderson’s model accounted for all known gravitational sources in the outer solar system, and even subtler effects such as radiation pressure from the Sun.  So something astronomers didn’t know about had to be causing the anomaly.

Astronomers and physicists have come up with dozens of interesting explanations for the Pioneer anomaly that would change our understanding of the universe.  Some proposed previously unknown clusters of dark matter in the solar system, but this was ruled out because the Voyager probes never showed this acceleration and our models of planetary orbits never showed such problems.  Others proposed rewriting gravity.  Ironically, an alternative theory to dark matter, known as Modified Newtonian dynamics (or MOND), was also invoked to explain the anomaly, as it proposes that gravity behaves differently at extremely low accelerations (intriguingly, also on the order of 10⁻¹⁰ meters per second per second).  Still others proposed that time passes at different rates for objects depending on their acceleration through the gravitational field of space, and Anderson’s group actually considered these theories for a while (in some way I truly don’t understand).

And others proposed less exciting explanations.  Tiny amounts of gas could be produced by and leak from the Pioneers’ nuclear power sources.  And some thought even the tiny amounts of heat radiating off the probes could push them off course.  No one seems to have taken the gas theory very seriously.  And Anderson’s team (now working with Slava Turyshev) wrote off the heat theory in the early 2000s, arguing that any force based on the heat should have decreased with distance from the Sun.

Viktor Toth, a programmer who does theoretical physics in his free time (seriously), helped change their minds.  Toth argued that Anderson’s team couldn’t rule out thermal effects without a detailed analysis.  In an interesting story about the merits of preserving old research data, Toth helped Turyshev save or track down copies of nearly all the Pioneer missions’ Doppler and temperature telemetry.  Somewhat amazingly, the team (now including Toth) managed to recreate a full CAD model of Pioneer 10 from the original engineering drawings and, using the temperature data, ran a computer simulation of the thermal emissions in all directions.  Their model showed that this radiation could be responsible for all but about 20% of the anomalous acceleration.
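
A rough way to see why heat is a plausible culprit (a sketch using round numbers I’m assuming, roughly 2.5 kW of RTG heat and a ~250 kg craft): radiated photons carry momentum, so power P beamed preferentially in one direction produces a recoil force F = P/c.

```python
# What fraction of the RTGs' heat, if radiated preferentially in one
# direction, would account for the whole anomaly?  a = eta * P / (m * c).
# Power and mass are assumed round numbers, not exact mission values.
c = 2.998e8         # speed of light, m/s
P = 2.5e3           # assumed total RTG thermal power, W
m = 250.0           # assumed spacecraft mass, kg
a_anom = 8.74e-10   # anomalous acceleration, m/s^2

eta = a_anom * m * c / P
print(f"required anisotropy ≈ {eta:.1%}")  # a few percent
```

An asymmetry of a few percent is easy to imagine for a craft whose big antenna dish sits to one side of its heat sources, which is why the detailed simulation was worth doing.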

The model seems pretty convincing.  But their figures look like they show the “thermal recoil” effect decreasing farther from the Sun, while the anomaly seems constant.  So maybe there’s still room for some anomalous physics.

Walling in Wi-Fi

A roll of wallpaper with a pattern of triangles arranged in hexagons

A world of ubiquitous wireless networking poses numerous opportunities for theft.  Hence the fancy new trend of RFID wallets to secure credit cards.  Of course, new credit cards aren’t the only things people want to protect.  Whether you’re incredibly concerned about someone hacking into your home wireless network, or maybe you own a Panera and wish everyone at the neighboring burger bar would quit stealing your free Wi-Fi (I might have done this when working out of town one summer…), the stumbling block to applying this idea to your home was our society’s inability to make wallets that can hold people.  Or maybe you didn’t feel like giving your home a neo-survivalist decor by covering your walls in chicken wire and/or aluminum foil (I’m scared to link to sites on that one, so just Google it on your own).


Detailed Faraday cage pattern of ink

So leave it to the French to create a fashionable Faraday cage.  Researchers at the Grenoble Institute of Technology have made a wallpaper containing electrically conductive ink.  The ink, which contains silver particles, forms a snowflake pattern on the wallpaper.  The pattern acts something like chicken wire, forming a selective Faraday cage that prevents the electromagnetic waves of a Wi-Fi signal from passing through the paper.  It doesn’t affect the network inside the room (or perhaps your house, if you only paper exterior walls).  Even better, the team expects this to cost no more than ordinary wallpaper.  And if metal snowflakes aren’t your thing, you can cover this with an additional layer of wallpaper without ruining the Faraday effect.

The researchers also make another claim about cell phone signals.  The original article is in French, which I don’t understand, but reading it through Google Translate, they claim the wallpaper doesn’t interfere with “emergency service” calls but does block three kinds of signals.  Another English article I found says the wallpaper shouldn’t interfere with cell phone signals at all.  The claim about emergency services strikes me as weird, because a call to 911 isn’t physically different from a call to your friend.  But I also don’t buy that it wouldn’t affect any cell phone signals.  If the photo is a reasonably accurate representation of the wallpaper, the pattern looks like there’s only about an inch of space between lines.  Since the “cage” isn’t solid, it does allow electromagnetic waves with wavelengths smaller than this spacing to pass through.  Wi-Fi uses radio frequencies in the 2400-2485 MHz range, and cell phones work somewhere between 1700 and 2155 MHz (the exact range depending on your carrier).  Dividing the speed of light (300,000,000 meters per second) by the frequency, we see that the wavelength of Wi-Fi signals is about 12.5 cm, while your phone’s signal has a wavelength somewhere between 14 and 17.6 cm.  So based on my quick and dirty math, it doesn’t seem like either signal should make it through.
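
Here’s that quick and dirty math written out (the one-inch line spacing is my eyeball estimate from the photo, so treat it as an assumption):

```python
# Wavelength = c / frequency.  A conductive mesh roughly blocks waves
# whose wavelength is larger than its hole spacing.  The ~1 inch
# (2.54 cm) spacing is my rough estimate from the photo.
c = 3.0e8                 # speed of light, m/s
mesh_spacing_cm = 2.54    # assumed gap between conductive lines

bands_hz = {
    "Wi-Fi low (2400 MHz)": 2.400e9,
    "Wi-Fi high (2485 MHz)": 2.485e9,
    "cell low (1700 MHz)": 1.700e9,
    "cell high (2155 MHz)": 2.155e9,
}
for name, f in bands_hz.items():
    wavelength_cm = c / f * 100
    blocked = wavelength_cm > mesh_spacing_cm  # mesh acts as a Faraday cage
    print(f"{name}: {wavelength_cm:.1f} cm -> {'blocked' if blocked else 'passes'}")
```

Every band comes out with a wavelength several times larger than the mesh spacing, so all of them should be blocked by this estimate.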

P.S. The French researchers also point out the survivalist/paranoid application of this: you could wallpaper your house in it to keep some electromagnetic radiation from getting in.

Title IX is more than just sports

So in theory, I’ve claimed this blog will look at science and society, not just scientific advances.  Now I’ll finally make that true.  While staying at a hotel out of town earlier this week, I read the free newspaper of choice at hotels all over the country:  USA Today.  What caught my eye was an op-ed with the headline “Girls don’t need Obama’s help with math”, written by Kirsten Powers, one of Fox News Channel’s liberal commentators.

Powers is responding to a recent effort by the Obama administration to use Title IX to ensure equal opportunities for both male and female students in science, technology, engineering, and math (STEM) fields.  One major part of her argument is “The End of Men” thesis.  While I’m definitely sympathetic to the idea that our education system seems to be failing a large population of boys (and as a male, my concern isn’t entirely unselfish), that doesn’t mean that the way our culture perceives STEM education and fields can’t also harm girls at the same time.

Powers cites several statistics about women earning degrees, but I’m not sure this all translates to women entering STEM careers.  Let’s break it down.

  • “women rule in biology with nearly 60% of all bachelor’s, master’s, and doctorates awarded to women”:  Okay, that probably speaks well for biology education (I tend to think gender ratios in the 60:40 ballpark are pretty reasonable).  But what becomes of all these women trained in biology?  Lots of biology undergrads end up going into medicine, which isn’t bad, but medicine also isn’t as male-dominated a profession as science and engineering.  Biology doctorates are likely to become professors (look at Table 3).  Academia, though, isn’t known for being the most hospitable field for women.  For example, one study has suggested women win fewer science awards than would be expected based on how many doctorates they hold.  And the tenure track is notorious for making it hard to start a family.
  • “40% of bachelor’s degrees awarded in the physical sciences and math go to women”:  This statistic is actually higher than I would have expected.  But again, what do these students do after school?  I don’t have official numbers on the trends, but I remember lots of my friends from college.  Of the five girls in my physics class, one decided to go into consulting and another went to med school.  Of the eight guys, only one of us wasn’t planning on doing technical work or going to graduate school in physics or engineering.  Of the four female chemistry majors I knew, two were pre-meds and one decided to become a teacher.
  • “72% of [psychology] degrees go to women”:  Okay, there’s probably something pulling men away from studying psych, but that’s not equivalent to overcoming older prejudices that push women away from science and engineering.  Also, having known lots of psych majors of both genders in college, I would hazard a guess that male psych students don’t feel pressured because of their gender.
  • Just throwing out “bachelor’s degree” without description can be kind of misleading.  At some schools, students in science and engineering majors can choose between a Bachelor of Science (BS) and a Bachelor of Arts (BA).  If a school makes that distinction, one of these degrees will involve more technical coursework and lab work than the other.  (Typically a BS will be the technical one, but my understanding is that can vary)  People who get the less technical degree may be preparing more for medicine, law, or other careers besides science/engineering.
  • It also seems like something must be going on for women to represent only 18% of engineering and computer science bachelor’s degrees given the above stats.  Engineering, the E in STEM, isn’t something completely unrelated to math or science.  It’s applying science with design thinking to solve problems.  For women to represent such a large portion of pure science students while having a much smaller presence in the application of these sciences would seem to suggest that something more than academics is at play.

What does it all mean?  I’m not entirely sure.  I don’t think many US colleges are blocking women from studying STEM fields, and so in some sense, I probably agree with the central idea of Powers’ argument.  It seems to be a bigger cultural problem.  Does that require government intervention at the college level?  Maybe not.  But considering some of the uncertainty in how women transition from being STEM students to being STEM professionals, I’m not sure I’m ready to discount the idea of this program.  I would also say it seems naive to think we can have such a culture and not expect it to affect the behavior of some individual officials at the college level, such as an older professor who may discourage women from declaring a major in his department.  As we hear more stories of brogramming or just outright sexism in the tech industry, maybe it’s worth checking to see if that comes from the universities producing these workers.

Graphene, Heal Thyself


Graphene’s perfect hexagons. From Wikipedia

One of the major stumbling blocks in nanotechnology is protecting materials.  When you’re making something on the scale of nanometers, just a few atoms being out of place can be a big deal.  This is where molecular self-assembly can be helpful.  Chemists have found many molecules that will spontaneously assemble into interesting and useful structures essentially just by being given the right atoms.  In a similar vein, several nanostructures have been found to repair themselves with little prodding from researchers.

Scientists at the University of Manchester (including one of the recipients of the 2010 Nobel Prize in Physics) have now found that graphene is one of those self-repairing structures.  This seems a bit unexpected, because I’ve never heard of anyone predicting that carbon structures could self-heal.  Somewhat ironically, the behavior was observed during research on etching graphene into specific shapes.  The Manchester team was trying to use metal atoms to cut specific shapes out of a graphene sheet, since graphene’s electronic properties change depending on the structure of its edges.

Graphene “healed” in two ways.  The first was triggered by dumping lots of extra carbon into the system near a hole in the graphene.  Carbon from hydrocarbon molecules was incorporated into the graphene layer by the electron beam.  This didn’t result in a “pure” graphene layer, though, because the hole would typically be filled by 5- and 7-atom carbon rings instead of graphene’s hexagonal, six-atom rings.  An overabundance of spare carbon also tended to result in a semi-disordered second layer of carbon growing on top of the graphene.  But even without feeding the holes extra carbon, graphene will try to repair itself under the electron beam.  The etching process would leave graphene with holes bordered by some 5- and 7-atom rings.  Just putting the edges of these holes under the electron beam without a carbon source seemed to “even” them out, as carbon atoms moved into graphene’s characteristic hexagonal arrangement, although the hole might still remain.  Both processes may also incorporate the impurity atoms that caused the holes.

One could argue this doesn’t really restore the graphene, since you would still be left with awkward shapes or holes.  But these can actually still be helpful in etching graphene into custom shapes, especially since the whole point of the Manchester group’s work was to cause localized defects.  Years of work on chemically altering carbon nanotubes have shown that you can selectively dissolve 5-atom and 7-atom carbon rings without destroying the major 6-atom ring structure, and chemists can do weird things to the ends of 6-atom carbon rings.  I do wonder if it might be a bit harder to selectively dissolve the odd rings in graphene, though, because I imagine the curvature of a nanotube helps dissolution by putting strain on the atomic bonds and presents the bond more easily to a solution.  But if the selective dissolving still works on graphene, it seems like the team may already have accomplished their goal.

Mass Quarkier than Just Higgs

As a physics major, I feel contractually obligated to post about the Higgs boson, since it has dominated science news for the last week.  But I honestly feel a bit unprepared, because my concentration was applied physics, and I decided to take electrical engineering courses instead of particle physics.  So while I know what things like bosons ARE, I can’t really work out all the math that goes into this stuff.  But one thing I do understand is energy.  And if you’re familiar with Einstein, you may realize that means I can claim some understanding of mass.

If you’ve read some of the several thousand Higgs articles written this week (if I had intentionally set myself up for a pun based on “particle”, I would be so awesome right now), you’ve probably heard something about the Higgs boson and/or field being responsible for all mass, usually through something like this analogy.  If you’ve also checked physics blogs or sites, you may have noticed there’s a big asterisk on that statement.  The Higgs field, if the Standard Model of particle physics is right, is responsible for the mass of all elementary particles, which is a much smaller set of things than all the mass in a universe of composite particles.

For the everyday matter you see, the Higgs only affects quarks and electrons. Quarks are the elementary particles that make up protons and neutrons, which you might remember from chemistry or physics. For those catching on to particle physics’ wonderful naming scheme, you might be wondering how a “composite” can have more mass than the elementary particles it is made of. This is where Einstein kicks in.  Quarks are held together in protons and neutrons through an interaction called the strong force. This force (here’s that wonderful naming scheme again) is quite strong. Forces carry energy, and Einstein showed that energy (E) is equal to mass (m) times the speed of light (c) squared. The energy from the strong force holding a neutron or proton’s components together, plus the energy of the quarks moving around, actually makes up roughly 99% of the mass we measure for protons and neutrons.
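
To put rough numbers on that (the quark masses below are approximate published values, and quark-mass bookkeeping is subtle, so treat this as a sketch):

```python
# E = m c^2 bookkeeping for a proton (two up quarks, one down quark).
# Masses in MeV/c^2 are approximate values; illustrative only.
m_proton = 938.3
m_up, m_down = 2.2, 4.7

quark_rest = 2 * m_up + m_down      # rest mass the Higgs field provides
fraction = quark_rest / m_proton
print(f"quark rest masses: {quark_rest:.1f} MeV ({fraction:.1%} of the proton)")
# The rest comes overwhelmingly from strong-force binding energy
# and the quarks' motion, via E = m c^2.
```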

The strong force is actually kind of notorious for its high energy.  It’s also weird: the strong force gets stronger as quarks move farther apart and weaker as they get closer. This leads to another interesting phenomenon with quarks: it’s practically impossible to observe an individual one.  If you try to pull quarks apart, then at a certain distance quark-antiquark pairs form from the energy of the strong force between the original quarks, because it can actually take less energy (in the form of mass) to create more quarks than to maintain the strong force between two or three highly separated quarks.  The common analogy is a rubber band:  stretch it past a certain point and you end up with two (albeit broken) rubber bands.
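
Here’s a cartoon of the rubber band snapping.  The “string tension” of roughly 1 GeV per femtometer is a commonly quoted approximation, and the ~0.7 GeV pair-creation threshold (about twice a light constituent quark mass) is my assumed round number:

```python
# Energy stored in the strong-force "string" grows roughly linearly with
# separation; once it exceeds the cost of making a new quark-antiquark
# pair, the string snaps.  Both constants are rough, assumed values.
tension = 1.0      # GeV per femtometer (commonly quoted approximation)
pair_cost = 0.7    # GeV, assumed ~2x a light constituent quark mass

for sep_fm in (0.25, 0.5, 1.0, 2.0):
    stored = tension * sep_fm
    result = "snaps into new quarks" if stored > pair_cost else "keeps stretching"
    print(f"{sep_fm} fm: {stored:.2f} GeV stored -> {result}")
```

So by the time the quarks are about a proton-width apart, it’s already cheaper for the vacuum to make new quarks than to keep stretching the band.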

Hello world!

Yes, this is how WordPress titles your first post anyway. But I thought “Hello world!”, the first program people commonly write when picking up a programming language, was an apt intro for a blog that wants to look at science, technology, and how society uses and understands these things.
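
For anyone who hasn’t seen it, the entire program (here in Python) is just:

```python
# The traditional first program in any language.
print("Hello, world!")
```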

Why call it “nontrivialproblems”?  Well, I did want there to be a space between “trivial” and “problems”, but that’s what I get for making a blog before understanding how to customize a theme.  More seriously, “nontrivial” is a term that carries a lot of weight in science, math, and engineering.  The “trivial solution” to a calculation is the solution where all terms are zero.  So, yeah, that often leads to a mathematically true answer, but it’s not really interesting from a theoretical or practical point of view.  But trivial also goes beyond that.  Something immediately obvious or deducible from a proof or theorem can be considered “trivial”. Richard Feynman once joked that mathematicians consider any theorem trivial once it has a proof.
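
A toy example of what “trivial solution” means (my own illustration): for any homogeneous system A·x = 0, x = 0 always works, which is exactly why it’s uninteresting, while a nontrivial solution exists only when A is singular.

```python
# Trivial vs. nontrivial solutions to the homogeneous system A x = 0.
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 2],
     [2, 4]]           # singular: second row is twice the first

trivial = [0, 0]       # always a solution, and always uninteresting
nontrivial = [2, -1]   # a solution only because A is singular

print(matvec(A, trivial))      # [0, 0]
print(matvec(A, nontrivial))   # [0, 0] as well
```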

So what is nontrivial? The stuff we don’t really understand. String theory is nontrivial, in both a mathematical and cultural sense. Predicting weather. Epigenetics and proteomics can be considered nontrivial.  Any area where we still don’t understand a lot of the rules that govern systems is “nontrivial”.

There’s a flip side to that understanding of nontrivial work, though.  A lot of science and engineering work that isn’t at the cutting edge of theory is considered “trivial” by some researchers.  The thing is, a lot of this work is also the work that’s about ready to hit the real world and affect lay people’s lives.  How carbon nanotubes grow is pretty well-settled, and we can make lots of cool devices with them (computer transistors, low-resistance wires, drug delivery systems, and more), but we still haven’t found a way to get most of these technologies efficiently to market. We have drugs for West Nile and malaria, but not ways to make them cheap enough to prevent outbreaks in developing countries.

Feynman said, “No problem is too small or too trivial if we can really do something about it.” And so that’s what we’ll be working with.  Whether we’re looking at particle physicists finding the Higgs boson, chemists trying to understand the large-scale order of water molecules, or an engineer hoping to crank out a few more watts from a solar cell, it’s all fascinating in its own way.