Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as those papers also suggest, at what point does it become a legitimate debate in the community about setting a definition? “Graphene” described a useful theoretical construct for decades before anyone thought a real sheet of it could be made, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: What properties do people use in clarifying these definitions, and how much does it vary by background? Personally, my work is way closer to the ideal of “graphene” than that of lots of people working with more extensively chemically modified graphene derivatives, and I’m fine using the term for almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even at that lower limit, find that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, lots of people with extensive polymer science backgrounds have noted that there are many papers that don’t refer to basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they wanted to incorporate into composites and then moved into the composites themselves. They may have backgrounds more in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, one paper I read noted that a lot of talk about “solutions” of nanoparticles would probably be more precise if framed in the terminology of colloids and dispersions.

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in the successful products of well-studied cluster reactions, though these are arguably pushing the limits of the term “nano”.
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical process of needing to scale up the ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to that of how we treat defects in the bulk?

Using Plants to Turn Pollution into Profits

Once again, I may prove why I’m a poor writer by burying the lede. But bear with me here, because this will come full circle and yield some fruit. You probably know that urban farming has become more popular over the last decade or so as local eating became trendy. As city dwellers started their own plots, people realized there might be a unique challenge to urban areas: avoiding lead poisoning. (Although a more recent study evidently suggests you’re getting less lead than people expected.) We used lead in lots of things throughout the 20th century, and it easily accumulated in soils exposed to high doses of some sources – so cities and areas by busy highways have lead from old gas emissions, old lots have lead from old paint, and even old lead pipes and batteries can leach lead into soils. Other pollutants can leach into soils in other places: mercury and cadmium can build up where significant amounts of coal are burned, and many mining practices can let a lot of the relevant metal leak out into the environment.

Traditionally, the way to deal with polluted soil is to literally dig it all up. This has a major drawback, in that completely replacing a soil patch also means throwing out some nice perks of the little micro-ecosystem that had developed, like root systems that help prevent erosion or certain nutrient sources. Recently, a technique called phytoremediation has caught on, and as the NYT article points out, it takes advantage of the fact that some plants are really good at absorbing these metals from the soil. We now know of so-called hyperaccumulators of a lot of different metals and a few other pollutants. These are nice because they concentrate the metals for us into parts of the plants we can easily dispose of, and they can help preserve aspects of the soil we like. (And roots can help prevent erosion of the soil into runoff to boot.) Of course, one drawback here is time. If you’re concerned that a plot with lead might end up leaching it into groundwater, you may not want to wait for a few harvests to go by to get rid of it.

But a second drawback seems like it could present an opportunity. A thing that bugged me when I first heard of hyperaccumulators was that disposing of them still seemed to pose lots of problems. You can burn the plants, but you would need to extract the metals from the fumes, or it just becomes like coal and gas emissions all over again. (Granted, it is a bit easier when you have it concentrated in one place.) Or you can just throw away the plants, but again, you need to make sure you’re doing it in a place that will safely keep the metals as the plants break down. When I got to meet someone who studies how metals accumulate in plants and animals last summer, I asked her if there was a way to do something productive with those plants that now had concentrated valuable metals. Dr. Pickering told me this is called “phytomining”, and that while people looked into it, economic methods still hadn’t been developed.

That looks like it may have changed last month, when a team from China reported making multiple nanomaterials from two common hyperaccumulators. The team studied Brassica juncea, which turns out to be mustard greens, and Sedum alfredii, an herb native to the region, both of which are known to accumulate copper and zinc. The plants were taken from a copper-zinc mine in Liaoning Province, China. The plants were first dissolved in a mix of nitric and perchloric acid, and literally just heating the acid residue managed to make carbon nanotubes. Adding some ammonia to the acid residue formed zinc oxide nanoparticles from the Sedum, and zinc oxide with a little bit of copper from the mustard greens. What’s really interesting is that the structure and shape of the nanotubes seemed to correlate with the size of the vascular bundles (a plant equivalent of arteries/veins) in the different plants.


A nanotube grown from the mustard greens. Source.

But as Dr. Pickering said to me, people have been looking into this for a while (indeed, the Chinese team has similar papers on this from 5 years ago). What’s needed for phytomining to take off is for it to be economical. And that’s where the end of the paper comes in. First, the individual materials are valuable. The nanotubes are strong and conductive and could have lots of uses. The zinc oxide particles already have some use in solar cells, and could be used in LEDs or as catalysts to help break down organic pollutants like fertilizers. The authors also claim they made the nanotubes really cheaply compared to other methods: they say they could make a kilogram for $120, while bulk prices from commercial suppliers of similar nanotubes are about $600/kg. (And I can’t even find that price, because looking at one of my common suppliers, I see multiwalled nanotubes selling on the order of $100 per gram.) What’s really interesting is they claim they can make a composite between the nanotubes and copper/zinc oxide particles that might be even more effective at breaking down pollutants.
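Out of curiosity, the claimed savings are easy to sanity-check with back-of-envelope arithmetic. This uses only the prices quoted above; the $100/gram figure is just what I happened to see from one supplier, so treat these as rough comparisons rather than market data.

```python
# Back-of-envelope check on the nanotube prices quoted above.
phytomined_usd_per_kg = 120.0    # the authors' claimed production cost
bulk_quote_usd_per_kg = 600.0    # bulk price they cite for similar commercial nanotubes
supplier_usd_per_g = 100.0       # small-quantity price I saw from one supplier

supplier_usd_per_kg = supplier_usd_per_g * 1000  # convert $/g to $/kg

print(bulk_quote_usd_per_kg / phytomined_usd_per_kg)   # 5.0 -> 5x cheaper than the bulk quote
print(supplier_usd_per_kg / phytomined_usd_per_kg)     # ~833x cheaper than small-quantity prices
```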

I imagine there will be some unforeseen issue in attempting to scale this up (because it seems like there always is). But this is an incredibly cool result. Common plants can help clean up one kind of pollution and be turned into valuable materials to help clean up a second kind of pollution. That’s a win-win.

A Nobel for Nanotubes?

A popular pastime on the science blogosphere is doing Nobel predictions: educated guesses about who may win a Nobel prize in the various science categories (physics, chemistry, and physiology or medicine). I don’t feel like I know enough to really do detailed predictions, but I did make one. Okay, more of a dream than a prediction. But I feel justified because Slate also seemed to vouch for it. What was it? I think a Nobel Prize in Physics should be awarded for the discovery and study of carbon nanotubes.

One potential issue with awarding a prize for carbon nanotube work could be priority. Nobel prizes can only be split between three people. While Iijima is generally recognized as the first to discover carbon nanotubes, it actually seems that they have been discovered multiple times (in fact, Iijima appears to have imaged a carbon nanotube in his thesis nearly 15 years before what is typically considered his “discovery”). It’s just that Iijima’s announcement happened to come at a time and place where the concept of a nanometer-sized cylinder of carbon atoms could be both well understood and greatly appreciated as a major focus of study. The paper linked to points out that many of the earlier studies that probably found nanotubes were mainly motivated by PREVENTING their growth, because they were linked to defects and failures in other processes. The committee could sidestep this by awarding the prize for the discovery of single-walled nanotubes, which narrows the field of potential awardees down to Iijima and one of his colleagues and a competing group at IBM in California. This would also work because a great deal of the hype around carbon nanotubes focuses on single-walled tubes, since they generally have properties superior to their multi-walled siblings and theory focuses on them.

No matter what, I would say Mildred Dresselhaus should be included in any potential nanotube prize because she has been one of the most important contributors to the theoretical understanding of carbon nanotubes since the beginning. She’s also done a lot of theoretical work on graphene, but the prize for graphene was more experimental because while theorists have been describing graphene since at least the 80s (Dresselhaus even has a special section in that same issue), no one had anything pure to work with until Geim and Novoselov started their experiments.

In 1996, another form of carbon was also recognized with the Nobel Prize in Chemistry. Rick Smalley, Robert Curl, and Harold Kroto won the prize for their discovery of buckminsterfullerene (or “buckyballs”) in 1985, for further work they did with other fullerenes, and for proving that these molecules really did have ball-like structures. So while the prize for graphene recognized unique experimental work that could finally test theory, this prize was for an experimental result no one was expecting. Carbon had been known to exist as a pure element in two forms, diamond and graphite, for a long time, and no one expected to find another stable form. Fullerenes opened people’s minds to nanostructures and served as a practical base for the start of much nanotechnology research, which was very much in vogue after Drexler’s discussions in the 80s.

Six diagrams are shown, in two rows of three. Top left shows atoms arranged in hexagonal sheets, which are then layered on top of each other. This is graphite.

Six phases of carbon. Graphite and diamond are the two common phases we encounter in normal conditions.

So why do I think nanotubes should get the prize? One could argue the discovery just seems transitional between buckyballs and graphene, so a prize would be redundant. But while a lot of work using nano-enhanced materials now focuses on graphene, a great deal of it is based on previous experiments using carbon nanotubes, so the transition was scientifically important. And nanotubes still have some unique properties. The shape of a nanotube immediately brings a lot of interesting applications to mind that wouldn’t come up for flat graphene or spherical buckyballs: nano-wires, nano “test tubes”, nano pipes, nanomotors, nano-scaffolds, and more. (Also, when describing nanotubes, it’s incredibly easy to say one is like a nanometer-sized carbon fiber, but I realize that ease of generating one-sentence descriptions is typically not a criterion for Nobel consideration.) The combination of these factors makes nanotubes incredibly important in the history of nanotechnology and helped the field transition into the critical one it is today.

There Are Probably No Nanoparticles in Your Food… At Least, Not Intentionally

Recently, Mother Jones posted an article about “Big Dairy” putting microscopic pieces of metal in food. Their main source is the Project on Emerging Nanotechnologies and its Consumer Products Inventory, a collaboration between the Wilson Center and Virginia Tech. Unfortunately, the Mother Jones piece seems to misunderstand how the CPI is meant to be used. But another problem is that the CPI itself seems poorly designed as a tool for journalists.

So what’s the issue? The Mother Jones piece mainly focuses on the alleged use of nanoparticles of titanium dioxide (TiO2) in certain foods to enhance colors, making whites whiter or brightening other colors. First, the piece makes an error in its description of TiO2 as a “microscopic piece of metal”. Titanium is a metal, but metal oxides are not, unless you consider rust a metal (which would also be wrong). But another issue is “microscopic”. Just because something is microscopic, which generally means smaller than your eye can see, doesn’t mean it’s a nanomaterial. The smallest thing you can see at a normal reading distance is about a tenth of a millimeter, which is 1000 times bigger than the 100 nanometer cut-off we typically use to talk about nanoparticles.
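That factor of 1000 is just unit conversion; a two-line sketch makes the scale gap explicit (the 0.1 mm figure for eyesight is the rough rule of thumb mentioned above):

```python
# Scale check: "microscopic" vs. "nano".
smallest_visible_nm = 100_000   # ~0.1 mm (roughly the smallest feature the eye resolves), in nanometers
nano_cutoff_nm = 100            # the usual upper bound for calling something a nanoparticle

print(smallest_visible_nm / nano_cutoff_nm)  # 1000.0
```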

A clear glass dish holds a bright white powder.

Titanium dioxide is a vivid white pigment, even as macroscopic particles.

And that’s what confuses me most here. As you can see above, titanium dioxide is white as a powder, but in that form its particles are several hundred nanometers wide at minimum, if not on the scale of microns (thousands of nanometers). In fact, nanoparticles of TiO2 are too small to scatter visible light, so they can’t appear white. A friend reminded me that sunscreens have switched from large TiO2 particles to actual nanoparticles precisely because it helps the sunscreen go on clearer. I’m not naive enough to think food companies wouldn’t try to save a buck while improving and standardizing appearances, but I also don’t think food scientists are dumb enough to pay for a version of a material that can’t fulfill the purpose they’re adding it for. So TiO2 is probably used in some foods, but not at a nanoscale that radically changes its health properties.
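A rough way to see why size matters so much: for particles much smaller than the wavelength of light, Rayleigh scattering scales with the sixth power of diameter. The sizes below are my own illustrative picks (pigment-grade TiO2 is typically a couple hundred nanometers), and refractive-index factors are ignored, so treat this as an order-of-magnitude sketch rather than real optics.

```python
# Order-of-magnitude sketch: relative light scattering of pigment-grade vs. nano TiO2.
# In the Rayleigh limit (diameter << wavelength), scattered intensity goes as d**6.
# Strictly, a 250 nm particle is outside that limit (Mie regime), so this only
# illustrates how steeply scattering falls off with size.
pigment_d_nm = 250.0   # typical pigment-grade particle (illustrative value)
nano_d_nm = 50.0       # a true nanoparticle (illustrative value)

ratio = (pigment_d_nm / nano_d_nm) ** 6
print(ratio)  # 15625.0 -> the pigment particle scatters thousands of times more light
```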

But I don’t entirely blame Mother Jones. The main reason I suspected the article was wrong is that one of my labmates at UVA has been working with TiO2 nanotubes for the last three years, and I’ve seen his samples. If I didn’t know that, and I just saw PEN include TiO2 on its list of nano additives, I would be inclined to believe it. PEN saw the Mother Jones piece and another similar article and responded by pointing out that the inventory categorized its inclusion of TiO2 in the products as having low confidence that it was actually used. But their source is an environmental science paper including actual chemical analyses of food grade TiO2, so why do they give it low confidence? Also, PEN claims the CPI is something the public can use to monitor nanotechnology in products, so maybe they should rethink how confident they are in their analysis if they want to keep selling it that way.

The paper the CPI references in the TiO2 claim is interesting too. It actually shows that most of the TiO2 particles are around 100 nm (figure below). But like I said, that’s kind of pushing the limit on how small the particles can be and still look white. It might be that the authors stumbled across a weird batch; they note that in liquid products containing TiO2, less than 5% of the TiO2 could pass through filters with pores 450 nanometers wide. Does the current process used to make food grade TiO2 end up making a lot of particles that are actually smaller than needed? Or maybe larger particles are breaking down into the smaller particles that Weir sees while in storage. This probably does need more research to see if other groups can replicate these results.

A histogram showing the distribution of particle sizes of TiO2. Categories go from 40 nanometers to 220 nanometers in intervals of 10. The greatest number of particles have diameters of 90-100 or 100-110 nanometers.

Distribution of TiO2 particle sizes in food grade TiO2. From Weir et al, http://pubs.acs.org/doi/abs/10.1021/es204168d?journalCode=esthag

2D Tin May be the New Graphene

Get ready, physics fashion watchers (I hope you exist), because we’ve got a new exotic material to play with: stanene! If you know the roots of the element symbols, you may already know what this material is: a 2D sheet of tin. (In Latin, tin is stannum, and many words for tin compounds were derived from this. It’s also why tin’s symbol is Sn.) The -ene comes from naming it in analogy to graphene. (And hopefully the name won’t get confused with the class of tin-containing organic compounds called stannenes that I found by adding an extra n in my Google search.)

What makes stanene special? It is a special kind of material called a “topological insulator”. A pretty advanced discussion of topological insulators and the origin of their behavior can be found here. The main thing to know about topological insulators is that their name is a bit confusing; they can actually be great conductors of electricity. The bulk of a topological insulator is insulating, but at the surface or edges, electricity can flow. (Three semesters of materials science is teaching me that surfaces are magical places.) Recent work has even managed to image that current and show it does indeed only flow at the edges.

Many topological insulators seem to have other special properties as well. Mercury telluride is a topological insulator that transports electrons in different directions based on their spin, which could be very useful for spintronics and quantum computers. At its edges, stanene is a perfect conductor (although notably the authors don’t use the term superconductor), which means it doesn’t heat up as current flows through it. That waste heat represents lost energy. If you’ve ever felt a hot computer or smartphone, you can guess that a decent portion of the energy we use to power our electronics ends up heating them as well. This means stanene circuit components should need less power.

I originally thought graphene was also superconductive, but it turns out you need to modify it pretty specifically, so stanene may beat it out for electronics applications. What’s particularly important about stanene is the temperature at which the perfect conductivity is observed. Pure stanene still shows perfect conductivity at room temperature, which isn’t true for the bulk superconductors or other topological insulators we’ve made so far (even graphene). And adding fluorine to the stanene keeps it perfectly conductive up to the boiling point of water! This means it could be used in nearly any consumer electronic device.

Considering there is already talk of graphene being overhyped, some may wonder if a breakthrough like this could be the first nail in graphene’s hypothetical coffin. It may be. I think this is going to be the start of a trend of many fantastic-seeming 2D materials. We’ve only recently discovered how to make graphene, and we’re just beginning to make other 2D materials (stanene being one of them). Part of the wonder of graphene came from decades of theoretical work preceding its creation in a lab; we knew a lot of the interesting properties it should have. But there isn’t as much history for many other 2D materials. And it also just seems like a lot of special properties come from materials having 2D hexagonal structures. We are just beginning to come to a broader understanding of 2D materials, and I think we’re going to discover along the way that lots of plain materials have interesting properties in 2D. It’s an exciting time.

Atoms in the Spotlight

We can now settle that your high school chemistry teacher did not lie to you when talking about atomic structure and (perhaps, if you were lucky enough to get that far) molecular orbitals. Recently, scientists have been able to observe the actual electron orbitals of a hydrogen atom and to see the rearrangement of atoms in a chemical reaction.

Being able to observe the different orbitals represents a new sort of “resolution” record in physics. Atomic force microscopy (AFM) can show you individual atoms and even the bonds between them, but that is probably hitting the absolute limit of what we can do with AFM techniques. Why? Because AFM is basically poking something with an atom, and you can’t resolve features much smaller than whatever you’re poking with. And since the electron cloud basically provides the whole force that prevents atoms from overlapping, AFM could never look at the orbitals in a detailed way.

So how did this impressively multinational team (seriously, there are universities from five different countries) do it? Think of an atomic-scale projector, or maybe an old cathode ray TV. In the original paper, it’s called a “photoionization microscope”. Ionization is giving an electron so much energy that it leaves the atom it is attached to; photoionization is doing this with light. The figure below shows a schematic of the experiment.
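For a sense of the energy scale involved (my own aside, not from the paper; the actual experiment photoionizes excited Rydberg states in a static electric field, which is subtler), the photon needed to ionize ground-state hydrogen in one shot works out to deep-ultraviolet light:

```python
# What light would ionize ground-state hydrogen outright? Use E = hc/lambda.
hc_eV_nm = 1239.84        # hc in convenient units (eV*nm), a standard value
ionization_eV = 13.6      # hydrogen ground-state ionization energy

wavelength_nm = hc_eV_nm / ionization_eV
print(round(wavelength_nm, 1))  # 91.2 -> deep ultraviolet, far shorter than visible light (~400-700 nm)
```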


The experimental set-up of Stodolna, et al, to view hydrogen orbitals. Source: Physical Review Letters


Thomas Edison Strikes Again

Chemists at Stanford have helped bring a battery designed by Thomas Edison into the modern age. Like us, Edison was also interested in electric cars, and in 1901 he developed an iron-nickel battery. In a case of buzzwords being right for a reason, the Stanford team used the same elements as Edison, but structured them on the nanoscale. Edison’s original design sounds like it was essentially just one alloy of iron and carbon for one electrode and an alloy of nickel and carbon for the other. The new battery consists of small iron pieces grown on top of graphene (that wonderful form of carbon we’ve talked about before) for the first electrode and small nickel regions grown on top of “tubes” of carbon (which probably means nanotubes) for the second.

The new battery is 1000 times more efficient than traditional nickel-iron batteries, but the improvement only brings it about equal to the energy storage and discharge abilities of our modern lithium ion batteries. Although there’s a lot of research being done on improving lithium ion batteries, there are some unique advantages to nickel-iron batteries. For one, there’s a lot more iron and nickel available than lithium, meaning the batteries could be cheaper. Nickel-iron batteries also don’t contain any flammable materials, while lithium batteries are capable of exploding. While nickel-iron batteries might not appear everywhere, their inability to explode could be a boon to electric car manufacturers.