Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” described a useful theoretical construct for decades before anyone thought a real sheet of it could be made, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: What properties do people use in clarifying these definitions, and how much does it vary by background? Personally, my usage is way closer to the ideal of “graphene” than that of lots of people working with more extensively chemically modified graphene derivatives, and I’m fine with using the term for almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even at that lower limit, consider that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, it seems noted by lots of people with extensive polymer science backgrounds that there are many papers that don’t refer to basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started working on the nanoparticles they want to incorporate into the composites and then moved into the composites. They may have backgrounds more in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, one paper I read noted that a lot of talk about “solutions” of nanoparticles would probably be more precise if the discussion were framed in the terminology of colloids and dispersions.

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in the successful products of well-studied cluster reactions, though these are pushing even the limits of the term “nano”.
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical process of needing to scale up the ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to that of how we treat defects in the bulk?

Carbon is Dead. Long Live Carbon?

Last month, the National Nanotechnology Initiative released a report on the state of commercial development of carbon nanotubes. And that state is mostly negative. (Which pains me, because I still love them.) If you’re not familiar with carbon nanotubes, you might know of their close relative, graphene, which has been in the news much more since the Nobel Prize awarded in 2010 for its discovery. Graphene is essentially a single layer of the carbon atoms found in graphite. A carbon nanotube can be thought of as rolling up a sheet of graphene into a cylinder.


Visualizing a single-walled (SW) carbon nanotube (CNT) as the result of rolling up a sheet of graphene.

If you want to use carbon nanotubes, there are a lot of properties you need to consider. Nearly 25 years after their discovery, we’re still working on controlling a lot of these properties, which are closely tied to how we make the nanotubes.

Carbon nanotubes have six major characteristics to consider when you want to use them:

  • How many “walls” does a nanotube have? We often talk about the single-walled nanotubes you see in the picture above, because their properties are the most impressive. However, it is much easier to make large quantities of nanotubes with multiple walls than single walls.
  • Size. For nanotubes, several things come into play here.
    • The diameter of the nanotubes is often related to chirality, another important aspect of nanotubes, and can affect both mechanical and electrical properties.
    • The length is also very important, especially if you want to incorporate the nanotubes into other materials or if you want to directly use nanotubes as structural materials themselves. For instance, if you want to add nanotubes to another material to make it more conductive, you want them to be long enough to routinely touch each other and carry charge through the entire material. Or if you want that oft-discussed nanotube space elevator, you need really long nanotubes, because stringing a bunch of short nanotubes together results in a weak material.
    • And the aspect ratio of length to width is important for materials when you use them in structures.
  • Chirality, which can basically be thought of as the curviness of how you roll up the graphene to get a nanotube (see the image below). If you think of rolling up a sheet of paper, you can roll it leaving the ends matched up, or you can roll it at an angle. Chirality is incredibly important in determining the way electricity behaves in nanotubes, and whether a nanotube behaves like a metal or like a semiconductor (like the silicon in your computer chips). It also turns out that the chirality of nanotubes is related to how they grow when you make them.
  • Defects. Any material is always going to have some deviation from an “ideal” structure. In the case of carbon nanotubes, missing or extra carbon atoms can replace a few of the hexagons of the structure with pentagons or heptagons. Or impurity atoms like oxygen may end up incorporated into the nanotube. Defects aren’t necessarily bad for all applications. For instance, if you want to stick a nanotube in a plastic, defects can actually help it incorporate better. But electronics typically need nanotubes of the highest purity.

Some of the different ways a nanotube can be rolled up: a (10, 0) zig-zag tube, a (10, 10) armchair tube, and a larger (10, 7) chiral tube. The numbers in parentheses are the “chiral vector” of the nanotube and determine its diameter and electronic properties.
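To make those chiral-vector numbers concrete, here’s a minimal sketch of the standard textbook relations (assuming the usual carbon–carbon bond length of 0.142 nm, and ignoring small curvature corrections): the (n, m) indices set the tube’s diameter, and a tube is metallic whenever n − m is a multiple of 3.

```python
import math

A_CC = 0.142  # carbon-carbon bond length in nm (standard textbook value)

def cnt_diameter_nm(n, m):
    """Diameter of an (n, m) nanotube from the length of its chiral vector."""
    a = A_CC * math.sqrt(3)  # graphene lattice constant in nm
    return a * math.sqrt(n**2 + n * m + m**2) / math.pi

def cnt_electronic_type(n, m):
    """A tube is metallic when (n - m) is a multiple of 3; otherwise
    it is semiconducting (ignoring small curvature corrections)."""
    return "metallic" if (n - m) % 3 == 0 else "semiconducting"

# The three tubes from the figure above:
for n, m in [(10, 0), (10, 10), (10, 7)]:
    print(f"({n}, {m}): {cnt_diameter_nm(n, m):.2f} nm, {cnt_electronic_type(n, m)}")
```

Running this shows why mixed-chirality batches are such a headache for electronics: the (10, 10) armchair and (10, 7) chiral tubes come out metallic while the (10, 0) zig-zag tube is semiconducting, even though all three diameters sit within about half a nanometer of each other.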

Currently, the methods we have to make large amounts of CNTs result in a mix of ones with different chiralities, if not also different sizes. (We have gotten much better at controlling diameter over the last several years.) For mechanical applications, the former isn’t much of a problem. But if you have a bunch of CNTs of different conductivities, it’s hard to use them consistently for electronics.

But maybe carbon nanotubes were always doomed once we discovered graphene. Working from the idea of a CNT as a rolled-up graphene sheet, you may realize that there are way more factors that can vary in a CNT than in a single flat flake of graphene. When working with graphene, there are just three main factors to consider:

  • Number of layers. This is similar to the number of walls of a nanotube. Scientists and engineers are generally most excited about single-layer graphene (which is technically the “true” graphene). The electronic properties change dramatically with the number of layers, and somewhere between 10 and 100 layers, you’re not that different from graphite. Again, the methods that produce the most graphene produce multi-layer graphene. But all the graphene made in a single batch will generally have consistent electronic properties.
  • Size. This is typically just one parameter, since most methods to make graphene result in roughly circular, square, or other equally shaped patches. Also, graphene’s properties are less affected by size than CNTs.
  • Defects. This tends to be pretty similar to what we see in CNTs, though in graphene there’s a major question of whether you can use an oxidized form or need the pure graphene for your application, because many production methods make the former first.

Single-layer graphene also has the added quirk of its electrical properties being greatly affected by whatever lies beneath it. However, that may be less of an issue for commercial applications, since whatever substrate is chosen for a given application will consistently affect all graphene on it. In a world where we can now make graphene in blenders or just fire up any carbon source ranging from Girl Scout cookies to dead bugs and let it deposit on a metal surface, it can be hard for nanotubes to sustain their appeal when growing them requires the additional steps of catalyst production and placement.

But perhaps we’re just leaving a devil we know for a more hyped devil we don’t. Near the end of last year, The New Yorker had a great article on the promises we’re making for graphene, the ones we made for nanotubes, and about technical change in general, which points out that we’re still years away from widespread adoption of either material for any purpose. In the meantime, we’re probably going to keep discovering other interesting nanomaterials, and just like people couldn’t believe we got graphene from sticky tape, we’ll probably be surprised by whatever comes next.

A Nobel for Nanotubes?

A popular pastime on the science blogosphere is doing Nobel predictions: educated guesses on who you think may win a Nobel Prize in the various science categories (physics, chemistry, and physiology or medicine). I don’t feel like I know enough to really do detailed predictions, but I did make one. Okay, more of a dream than a prediction. But I feel justified because Slate also seemed to vouch for it. What was it? I think a Nobel Prize in Physics should be awarded for the discovery and study of carbon nanotubes.

One potential issue with awarding a prize for carbon nanotube work could be priority. Nobel Prizes can only be split among three people. While Iijima is generally recognized as the first to discover carbon nanotubes, it seems they have actually been discovered multiple times (in fact, Iijima appears to have imaged a carbon nanotube in his thesis nearly 15 years before what is typically considered his “discovery”). It’s just that Iijima’s announcement happened at a time and place where the concept of a nanometer-sized cylinder of carbon atoms could be both well understood and greatly appreciated as a major focus of study. The paper linked to points out that many of the earlier studies that probably found nanotubes were mainly motivated by preventing their growth, because they were linked to defects and failures in other processes. The committee could limit this problem by awarding the prize for the discovery of single-walled nanotubes, which brings the field of potential awardees down to Iijima and one of his colleagues and a competing group at IBM in California. This would also work because a great deal of the hype around carbon nanotubes focuses on single-walled tubes, since they generally have properties superior to their multi-walled siblings and theory focuses on them.

No matter what, I would say Mildred Dresselhaus should be included in any potential nanotube prize because she has been one of the most important contributors to the theoretical understanding of carbon nanotubes since the beginning. She’s also done a lot of theoretical work on graphene, but the prize for graphene was more experimental because while theorists have been describing graphene since at least the 80s (Dresselhaus even has a special section in that same issue), no one had anything pure to work with until Geim and Novoselov started their experiments.

In 1996, another form of carbon was also recognized with the Nobel Prize in Chemistry. Rick Smalley, Robert Curl, and Harold Kroto won the prize for their 1985 discovery of buckminsterfullerene (or “buckyballs”) and for further work with other fullerenes, including proving that these molecules really did have ball-like structures. So while the prize for graphene recognized unique experimental work that could finally test theory, this prize was for an experimental result no one was expecting. Carbon had long been known to exist as a pure element in two forms, diamond and graphite, and no one expected to find another stable form. Fullerenes opened people’s minds to nanostructures and served as a practical base for the start of much nanotechnology research, which was very much in vogue after Drexler’s discussions in the 80s.


Six phases of carbon. Graphite and diamond are the two common phases we encounter in normal conditions.

So why do I think nanotubes should get the prize? One could argue the prize just seems transitional between buckyballs and graphene, and so would be redundant. While a lot of work using nano-enhanced materials does now focus on graphene, a great deal of it is based on previous experiments using carbon nanotubes, so the transition was scientifically important. And nanotubes still have some unique properties. The shape of a nanotube immediately brings a lot of interesting applications to mind that wouldn’t come up for flat graphene or spherical buckyballs: nano-wires, nano “test tubes”, nano pipes, nanomotors, nano-scaffolds, and more. (Also, when describing nanotubes, it’s incredibly easy to say they’re like nanometer-sized carbon fiber, but I realize that ease of generating one-sentence descriptions is typically not a criterion for Nobel consideration.) The combination of these factors makes nanotubes incredibly important in the history of nanotechnology and helped it transition into the critical field it is today.

There Are Probably No Nanoparticles in Your Food… At Least, Not Intentionally

Recently, Mother Jones posted an article about “Big Dairy” putting microscopic pieces of metal in food. Their main source is the Project on Emerging Nanotechnologies and its Consumer Products Inventory, a collaboration between the Wilson Center and Virginia Tech. Unfortunately, the Mother Jones piece seems to misunderstand how the CPI is meant to be used. But another problem is that the CPI itself seems poorly designed as a tool for journalists.

So what’s the issue? The Mother Jones piece mainly focuses on the alleged use of nanoparticles of titanium dioxide (TiO2) in certain foods to enhance colors, making whites whiter or brightening other colors. First, the piece errs in describing TiO2 as a “microscopic piece of metal”. Titanium is a metal, but metal oxides are not, unless you consider rust a metal (which would also be wrong). Another issue is “microscopic”. Just because something is microscopic, which generally means smaller than your eye can see, doesn’t mean it’s a nanomaterial. The smallest thing you can see at a normal reading distance is about a tenth of a millimeter, which is 1000 times bigger than the 100 nanometer cut-off we typically use to talk about nanoparticles.

A clear glass dish holds a bright white powder.

Titanium dioxide is a vivid white pigment, even as macroscopic particles.

And that’s what confuses me most here. As you can see above, titanium dioxide is white as a powder, but in that form its particles are several hundred nanometers wide at minimum, if not on the scale of microns (a micron is 1000 nanometers). In fact, nanoparticles of TiO2 are too small to scatter visible light, so they can’t appear white. A friend reminded me how sunscreens have switched from large TiO2 particles to actual nanoparticles precisely because it helps the sunscreen go on clearer. I’m not naive enough to think food companies wouldn’t try to save a buck while improving and standardizing appearances, but I also don’t think food scientists are dumb enough to pay for a version of a material that can’t fulfill the purpose they’re adding it for. So TiO2 is probably used in some foods, but not at a nanoscale that radically changes its health properties.
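A rough way to see why size matters so much for whiteness: for particles much smaller than the wavelength of light, scattered intensity falls off as the sixth power of the diameter (the Rayleigh limit). The sketch below is only an order-of-magnitude illustration under that assumption; pigment-grade TiO2, at a couple hundred nanometers, actually sits in the Mie regime, where this simple scaling no longer holds.

```python
def rayleigh_scattering_ratio(d1_nm, d2_nm):
    """Relative scattered intensity of two particles in the Rayleigh
    (diameter << wavelength) limit, where intensity scales as d**6."""
    return (d1_nm / d2_nm) ** 6

# Compare a 100 nm particle (right at the "nano" cut-off) with a
# 25 nm sunscreen-style nanoparticle: 4**6 = 4096, i.e. the smaller
# particle scatters thousands of times less light, which is why
# true nanoparticle TiO2 goes on clear instead of white.
print(rayleigh_scattering_ratio(100, 25))
```

The steep d**6 dependence is the whole point: halving the particle size cuts scattering by a factor of 64, so a food company buying TiO2 for whiteness has every incentive to stay well above the nanoscale.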

But I don’t entirely blame Mother Jones. The main reason I had a hunch the article was wrong is that one of my labmates at UVA has been working with TiO2 nanotubes for the last three years, and I’ve seen his samples. If I didn’t know that, and I just saw PEN include TiO2 on its list of nano additives, I would be inclined to believe it. PEN saw the Mother Jones piece and another similar article and responded by pointing out that the inventory had categorized the inclusion of TiO2 in those products with low confidence that it was actually used. But their source is an environmental science paper that includes actual chemical analyses of food-grade TiO2, so why assign it low confidence? Also, PEN claims the CPI is something the public can use to monitor nanotechnology in products, so maybe they should rethink how confident they are in their analysis if they want to keep selling it that way.

The paper the CPI references in the TiO2 claim is interesting, too. It actually shows that most of the TiO2 particles are around 100 nm (figure below). But like I said, that’s pushing the limit on how small the particles can be and still look white. It might be that the authors stumbled across a weird batch, as they note that in liquid products containing TiO2, less than 5% of the TiO2 could pass through filters with pores 450 nanometers wide. Does the current process used to make food-grade TiO2 end up making a lot of particles that are actually smaller than needed? Or maybe larger particles break down in storage into the smaller particles that Weir sees. This probably does need more research to see whether other groups can replicate these results.

A histogram showing the distribution of particle sizes of TiO2. Categories go from 40 nanometers to 220 nanometers in intervals of 10. The greatest number of particles have diameters of 90-100 or 100-110 nanometers.

Distribution of TiO2 particle sizes in food grade TiO2. From Weir et al, http://pubs.acs.org/doi/abs/10.1021/es204168d?journalCode=esthag

2D Tin May be the New Graphene

Get ready, physics fashion watchers (I hope you exist), because we’ve got a new exotic material to play with: stanene! If you know the roots of the element symbols, you may already know what this material is: a 2D sheet of tin. (In Latin, tin is stannum, and many words about tin compounds were derived from this; it’s also why tin’s symbol is Sn.) The -ene comes from naming it in analogy to graphene. (And hopefully the name won’t get confused with the class of tin-containing organic compounds called stannenes that I found by adding an extra n in my Google search.)

What makes stanene special? It is a kind of material called a “topological insulator”. A pretty advanced discussion of topological insulators and the origin of their behavior can be found here. The main thing to know about topological insulators is that their name is a bit confusing; they can actually be great conductors of electricity. The bulk of a topological insulator is insulating, but at the surface or edges, electricity can flow. (Three semesters of materials science is teaching me that surfaces are magical places.) Recent work has even managed to image that current and show it does indeed only flow at the edges.

Many topological insulators seem to have other special properties as well. Mercury telluride is a topological insulator that transports electrons in different directions based on their spin, which could be very useful for spintronics and quantum computers. At its edges, stanene is a perfect conductor (although notably the researchers don’t use the term superconductor), which means it doesn’t heat up as current flows through it. That waste heat represents lost energy. If you’ve ever felt a hot computer or smartphone, you can guess that a decent portion of the energy we use to power our electronics ends up heating them as well. This means stanene circuit components should need less power.

I originally thought graphene was also superconductive, but you need to modify it pretty specifically, so stanene may beat it out for electronics applications. What’s particularly important about stanene is the temperature at which the perfect conductivity is observed. Pure stanene still shows perfect conductivity at room temperature, which isn’t true for the bulk superconductors or other topological insulators we’ve currently made (even graphene). And adding fluorine to the stanene keeps it perfectly conductive up to the boiling point of water! This means it could be used in nearly any consumer electronics.

Considering there is already talk of graphene being overhyped, some may wonder if a breakthrough like this could be the first nail in a hypothetical science coffin. It may be. I think this is going to be the start of a trend of many fantastic-seeming 2D materials. We’ve only recently discovered how to make graphene, and we’re just beginning to make other 2D materials (stanene being one of them). Part of the wonder of graphene came from decades of theoretical work preceding its creation in a lab; we knew a lot of the interesting properties it should have. But there isn’t as much history for many other 2D materials. And it also just seems like a lot of special properties come from materials having 2D hexagonal structures. We are just beginning to come to a broader understanding of 2D materials, and I think this means we’re going to discover along the way that lots of plain materials have interesting properties in 2D. It’s an exciting time.

Atoms in the Spotlight

We can now settle that your high school chemistry teacher did not lie to you about atomic structure and (perhaps, if you were lucky enough to get that far) molecular orbitals. Recently, scientists have been able to observe the actual electron orbitals of a hydrogen atom and see the rearrangement of atoms in a chemical reaction.

Being able to observe the different orbitals represents a new sort of “resolution” record in physics. Atomic force microscopy (AFM) can show you individual atoms and even the atomic bonds between atoms, but that is probably hitting the absolute limit of what we can do with AFM techniques. Why? Because AFM is basically like poking something with an atom, and you can’t resolve features that are much smaller than whatever you’re poking with. And since the electron cloud essentially provides the whole force that prevents atoms from overlapping, AFM could never look at the orbitals in a detailed way.

So how did this impressively multinational team (seriously, there are universities from five different countries) do it? Think of an atomic-scale projector, or maybe an old cathode ray TV. In the original paper, it’s called a “photoionization microscope”. Ionization is giving an electron so much energy that it leaves the atom it is attached to; photoionization is doing this with light. The figure below shows a schematic of the experiment.


The experimental set-up of Stodolna, et al, to view hydrogen orbitals. Source: Physical Review Letters


Thomas Edison Strikes Again

Chemists at Stanford have helped bring a battery designed by Thomas Edison into the modern age. Like us, Edison was also interested in electric cars, and in 1901 he developed an iron-nickel battery. In a case of buzzwords being right for a reason, the Stanford team used the same elements as Edison but structured them on the nanoscale. Edison’s original design sounds like it was essentially just one alloy of iron and carbon for one electrode and one of nickel and carbon for the other electrode. The new battery consists of small iron pieces grown on top of graphene (that wonderful form of carbon we’ve talked about before) for the first electrode and small nickel regions grown on top of “tubes” of carbon (which probably means nanotubes) for the second.

The new battery is 1000 times more efficient than traditional nickel-iron batteries, but even with that improvement it is only now about equal to the energy storage and discharge abilities of our modern lithium-ion batteries. Although there’s a lot of research being done on improving lithium-ion batteries, nickel-iron batteries have some unique advantages. For one, iron and nickel are much more abundant than lithium, meaning the batteries could be cheaper. Nickel-iron batteries also don’t contain any flammable materials, while lithium batteries are capable of exploding. While nickel-iron batteries might not appear everywhere, their inability to explode could be a boon to electric car manufacturers.