Reclaiming Science as a Liberal Art

What do you think of when someone talks about the liberal arts? Many of you probably think of subjects like English and literature, history, classics, and philosophy. Those are all a good start for a liberal education, but they are only the fields in the humanities. Perhaps you also think of the social sciences, which help you understand the institutions and actors in our culture: fields like psychology, sociology, and economics. But what about subjects like physics, biology, chemistry, or astronomy? Would you ever think of them as belonging to the liberal arts, or would you cordon them off into the STEM fields? I would argue that excluding the sciences from the liberal arts is both historically wrong and harmful to society.

First, let’s look at the original conception of the liberal arts. Your study would begin with the trivium, the three subjects of grammar, logic, and rhetoric. The trivium has been described as a progression of study into argument. Grammar is concerned with how things are symbolized. Logic is concerned with how things are understood. Rhetoric is concerned with how things are effectively communicated, because what good is it to understand things if you cannot properly share your understanding with other learned people? With its focus on language, the trivium does fit the common stereotype of the liberal arts as a humanistic writing education.

But it is important to understand that the trivium was considered only the beginning of a liberal arts education. It was followed by the supposedly more “serious” quadrivium of arithmetic, geometry, music, and astronomy. The quadrivium is focused on number and can also be viewed as a progression. Arithmetic teaches you about pure numbers. Geometry uses number to describe space. Music, as it was taught in the quadrivium, focused on the ratios that produce notes and the description of notes in time. Astronomy comes last, as it builds on this knowledge to understand the mathematical patterns in space and time of bodies in the heavens. Only after completing the quadrivium, when one would have a knowledge of both language and numbers, would a student move on to philosophy or theology, the “queen of the liberal arts”.

The seven liberal arts surrounding philosophy.

Although this progression might seem strange to some, it makes a lot of sense when you consider that science developed out of “natural philosophy”. Understanding what data and observations mean, whether they come from an ordinary experiment or from “big data”, is a philosophical activity. As my professors say, running an experiment without an understanding of what I measured makes me a technician, not a scientist. Or consider the alchemists, who included many great experimentalists who developed some important chemical insights, but who are typically excluded from our conception of science because they worked with different philosophical assumptions. The findings of modern science also tie into the major questions that define philosophy. What does it say about our place in the universe if there are 10 billion planets like Earth in our galaxy, or that we are connected to all other living things on Earth through chemistry and evolution?

We get the term liberal arts from Latin, artes liberales, the arts or skills that are befitting of a free person. The children of the privileged would pursue those fields. This was in contrast to the mechanical arts – fields like clothesmaking, agriculture, architecture, martial arts, trade, cooking, and metalworking. The mechanical arts were a decent way for someone without status to make a living, but still considered servile and unbecoming of a free (read “noble”) person. This distinction breaks down in modern life because we are no longer that elitist in our approach to liberal education. We think everyone should be “free”, not just an established elite.

More importantly, in a liberal democracy, we think everyone should have some say in how they are governed. Many major issues in modern society relate to scientific understanding and knowledge. To talk about vaccines, you need some understanding of the immune system. The discussion over chemicals is very different when you know that we are made up of chemicals. It is hard to understand what is at stake in climate change without a knowledge of how Earth’s various geological and environmental systems work, and it is hard to evaluate solutions if you don’t know where energy comes from. And how can we talk about surveillance without understanding how information is obtained and how it is distributed? The Founding Fathers said they had to study politics and war to win freedom for their new nation. As part of a liberal education, Americans today need to learn science in order to keep theirs.

(Note: This post is based on a speech I gave as part of a contest at UVA. It reflects a view I think is often overlooked in education discussions, so I wanted to adapt it into a blog post.

As another aside, it’s incredibly interesting that people now tend to unambiguously think of the social sciences as part of the liberal arts while wavering more on the natural sciences, since the idea of a “social” science wasn’t really developed until well after the conception of the liberal arts.)

Why Can’t You Reach the Speed of Light?

A friend from high school had a good question that I wanted to share:
I have a science question!!! Why can’t we travel the speed of light? We know what it is, and that its constant. We’ve even seen footage of it moving along a path (it was a video clip I saw somewhere [Edit to add: there are now two different experiments that have done this: one that requires multiple repeats of the light pulse and a newer technique that can work with just one]). So, what is keeping us from moving at that speed? Is it simply an issue of materials not being able to withstand those speeds, or is it that we can’t even propel ourselves or any object fast enough to reach those speeds? And if its the latter, is it an issue of available space/distance required is unattainable, or is it an issue of the payload needed to propel us is simply too high to calculate/unfeasable (is that even a word?) for the project? Does my question even make sense? I got a strange look when I asked someone else…
This question actually makes a lot of sense, because when we talk about space travel, people often use light-years to describe the vast distances involved and point out how slow our own methods are in comparison. But it turns out the roadblock is fundamental, not just practical. We can’t reach the speed of light, at least in our current understanding of physics, because relativity says it is impossible.

To put it simply, anything with mass can’t reach the speed of light. This is because E = mc² works in both directions. The equation means that the energy of something is its mass times the speed of light squared. In chemistry (or a more advanced physics class), you may have talked about the mass defect of some radioactive compounds. The mass defect is the difference in mass before and after certain nuclear reactions; that missing mass was actually converted into energy. (This energy is what is exploited in nuclear power and nuclear weapons. Multiplying by the speed of light squared means even a little mass equals a lot of energy. The Little Boy bomb dropped on Hiroshima had 140 pounds of uranium, and no more than two pounds of that are believed to have undergone fission to produce the nearly 16-kiloton blast.)
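
To get a feel for how lopsided that conversion is, here’s a quick back-of-the-envelope sketch in Python. This is my own illustration, using the standard convention of 4.184 × 10¹² joules per kiloton of TNT, not a calculation from the original reports of the blast:

```python
# Back-of-the-envelope: how much mass-energy conversion does a
# ~16 kiloton blast correspond to? Rearranging E = m * c^2 gives m = E / c^2.

C = 299_792_458          # speed of light, in m/s
KILOTON_TNT = 4.184e12   # joules per kiloton of TNT (standard convention)

blast_energy = 16 * KILOTON_TNT        # ~6.7e13 J
mass_converted = blast_energy / C**2   # in kg

print(f"Energy released: {blast_energy:.2e} J")
print(f"Mass converted:  {mass_converted * 1000:.2f} g")
# Under a gram of mass accounts for the whole blast -- a tiny sliver of the
# roughly two pounds of uranium that actually underwent fission.
```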

But it also turns out that as something with mass goes faster, its kinetic energy also acts like extra mass. This “relativistic mass” greatly increases as you approach the speed of light. So the faster something gets, the heavier it becomes and the more energy you need to accelerate it. It’s worth pointing out that the accelerating object hasn’t actually gained material – if your spaceship was initially, say, 20 moles of unobtanium, it is still 20 moles of material even at 99% of the speed of light. Instead, the increase in “mass” comes out of the geometry of spacetime as the object moves through it. In fact, this is why some physicists don’t like using the term “relativistic mass” and would prefer to focus on the relativistic descriptions of energy and momentum. What’s also really interesting is that the math underlying this in special relativity implies that anything that doesn’t have mass HAS to travel at the speed of light.
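
For the curious, the textbook way to see that last claim (this is standard special relativity, not something worked out in the original post) is the energy–momentum relation:

```latex
% Relativistic energy-momentum relation:
E^2 = (pc)^2 + (mc^2)^2
% A particle's speed satisfies v = pc^2 / E. Setting m = 0 gives E = pc, so
v = \frac{pc^2}{E} = \frac{pc^2}{pc} = c
% i.e., anything without mass has no choice but to move at exactly c.
```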

The kinetic energy of a 1 kg object at various fractions of the speed of light; the energy needed grows without bound as the speed approaches that of light. For reference, 10^18 J is about a tenth of the United States’ annual electrical energy consumption.

The graph above shows the (relativistically corrected) kinetic energy of a 1 kilogram (2.2 pound) object at different speeds. You can basically think of it as representing how much energy you need to impart to the object to reach that speed. I started the graph at one ten-thousandth of the speed of light, which is about twice the speed at which the New Horizons probe was launched, and ended it at 99.99% of the speed of light. Just getting to 99.999% of the speed of light would have brought the maximum up another order of magnitude.
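
If you want to check the graph’s numbers yourself, here’s a minimal sketch of the underlying calculation, assuming the standard relativistic kinetic energy formula KE = (γ − 1)mc²:

```python
import math

C = 299_792_458  # speed of light, in m/s

def kinetic_energy(mass_kg, fraction_of_c):
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1 / math.sqrt(1 - fraction_of_c**2)
    return (gamma - 1) * mass_kg * C**2

# A 1 kg object, from the graph's starting speed up past its right edge
for f in [1e-4, 0.5, 0.9, 0.99, 0.9999, 0.99999]:
    print(f"{f:.5f} c -> {kinetic_energy(1, f):.3e} J")
```

At one ten-thousandth of c this agrees with the classical ½mv² almost exactly; the blow-up as the fraction approaches 1 is exactly what the graph shows.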

Quick Thoughts on Diversity in Physics

Earlier this month, during oral arguments for Fisher v. University of Texas, Chief Justice John Roberts asked what perspective an African-American student would offer in physics classrooms. The group Equity and Inclusion in Physics and Astronomy has written an open letter about why this line of questioning may miss the point about diversity in the classroom. But it also seems worth pointing out why culture does matter in physics (and science more broadly).

So nature is nature, and people can develop a theoretical understanding of it anywhere, and that understanding should be similar. (I think. This is actually glossing over what I imagine is a deep philosophy of science question.) But nature is also incredibly vast, and people approach studies of it in ways that can reflect their culture. Someone may choose to study a phenomenon because it is one they see often in their own lives. Or they may develop an analogy between theory and some aspect of culture that helps them better understand a concept. You can’t wax philosophical about Kekulé thinking of the ouroboros while he was studying the structure of benzene without admitting that culture has some influence on how people approach science. There are literally entire books and articles about Einstein and Poincaré being influenced by the sociotechnical issues of late 19th/early 20th century Europe as they developed the concepts that would lead to Einstein’s theories of relativity. A physics community that is a monoculture misses out on other influences and perspectives. So yes, physics should be diverse, and more importantly, physics should be welcoming to all kinds of people.

It’s also worth pointing out that this becomes immensely important in engineering and technology, where the problems people choose to work on are often heavily influenced by their life experiences. For instance, I have heard people say that India does a great deal of research on speech recognition as a user interface because India still has a large population that cannot read or write, and even then, they may not all use the same language.

Thoughts on Basic Science and Innovation

Recently, science writer and House of Lords member Matt Ridley wrote an essay in The Wall Street Journal about the myth of basic science leading to technological development. Many people have criticized it, and it even seems like Ridley has walked back some claims. One engaging take can be found here, which includes a great quote that I think helps summarize a lot of the reaction:

I think one reason I had trouble initially parsing this article was that it’s about two things at once. Beyond the topic of technology driving itself, though, Ridley has some controversial things to say about the sources of funding for technological progress, much of it quoted from Terence Kealey, whose book The Economic Laws of Scientific Research has come up here before.

But also, it seems like Ridley has a weird conception of the two major examples he cites as overturning the “myth”: steam engines and the structure of DNA. The issue with steam engines is that we mainly associate them with James Watt, whom you memorialize every time you fret about how many watts all your devices are consuming. Steam engines actually preceded Watt; the reason we associate them with him is that he greatly improved their efficiency thanks to his understanding of latent heat, the energy that goes into changing something from one phase into another. (We sort of discussed this before. The graph below helps summarize.) Watt understood latent heat because his colleague and friend Joseph Black, a chemist at the University of Glasgow, discovered it.

The latent heat is the heat added to go between phases. In this figure, it is represented by the horizontal segment between the heating of ice and the heating of water, and by the other horizontal segment between the heating of water and the heating of water vapor.

I don’t know whether or not X-ray crystallography was ever used for industrial purposes in the textiles industry, but it has pretty consistently been used in academia since its basic principles were discovered a century ago. The 1915 Nobel Prize in Physics was literally for the development of X-ray crystallography theory. A crystal structure of a biological molecule was determined by X-ray studies at least as early as 1923. The idea that DNA crystallography only took off as a popular technique because of spillover from industry is incredibly inaccurate.

Ridley also seems to make a basic assumption: that government has crowded out the private sector as a source of basic research over the past century. It sounds reasonable, and it seems testable as a hypothesis. As a percentage of GDP (which seems like a semi-reasonable metric for concerns about crowding out), federal spending on research and development has generally been on the decline since the 70s, and is now about a third less than its relatively stable level that decade. If private R&D had been crowded out, a competitor dropping by that much seems like a decent opening for some resurgence, especially since one of the most cited examples of private research, Bell Labs, was still going strong all this time. But instead, Bell cut most of its basic research programs just a few years ago.

Federal R&D spending as a percentage of GDP, fiscal years 1976 to 2016: a slight decrease from 1976 to 1992, a large drop in the 90s, a partial recovery around 2004, and another decline since 2010.

To be fair, more private philanthropists now seem to be funding academic research. The key word, though, is philanthropic, not commercial, the kind of funding Ridley refers to a lot throughout the essay. Also, a significant amount of this new private funding is for prizes, but you can only get a prize after you have done the work.

There is one major thing to take from Ridley’s essay, though I think most scientists would admit it too. It’s somewhat ridiculous to try to lay out a clear path from a new fundamental result to a practical application, and if you hear a researcher claim to have one, keep your BS filter high. As The New Yorker has discussed, even results that seem obviously practical have a hard time clearing feasibility hurdles. (Also, maybe it’s just a result of small reference pools, but it seems like a lot of the researchers I read are also concerned that research now seems to require some clear “mother of technology” justification.) Similarly, practical developments may not always be obvious. Neil deGrasse Tyson once pointed out that if you had asked a physicist in the late 1800s about the best way to quickly heat something, they would probably not have described anything resembling a microwave.

Common timeframe estimates of when research should result in a commercially available product, followed by translations suggesting how unrealistic this is:

  • “The fourth quarter of next year” – The project will be canceled in six months.
  • “Five years” – I’ve solved the interesting research problems. The rest is just business, which is easy, right?
  • “Ten years” – We haven’t finished inventing it yet, but when we do, it’ll be awesome.
  • “25+ years” – It has not been conclusively proven impossible.
  • “We’re not really looking at market applications right now.” – I like being the only one with a hovercar.

Edit to add: Also, I almost immediately regret using innovation in the title because I barely address it in the post, and there’s probably a great discussion to have about that word choice by Ridley. Apple, almost famously, funds virtually no basic research internally or externally, which I often grumble about. However, I would not hesitate to call Apple an “innovative” company. There are a lot of design choices that can improve products that can be pretty divorced from the physical breakthroughs that made them unique. (Though it is worth pointing out human factors and ergonomics are very active fields of study in our modern, device-filled lives.)

So What is Materials Science (and Engineering)?

So this is my 100th post, and I felt like it should be kind of special. So I want to cover a question I get a lot, and one that’s important to me: what exactly is materials science? My early answer was that “it’s like if physics and chemistry had a really practical baby.” One of my favorite versions is a quote from this article on John Goodenough, one of the key figures in making rechargeable lithium-ion batteries: “In hosting such researchers, Goodenough was part of the peculiar world of materials scientists, who at their best combine the intuition of physics with the meticulousness of chemistry and pragmatism of engineering”. Which is a much more elegant (and somewhat ego-boosting) way of wording my description. In one of my first classes in graduate school, my professor described materials science as “the study of defects and how to control them to obtain desirable properties”.

A more complete definition is some version of the one that shows up in most introductory lessons: materials science studies the relationships between the structure of a material, its properties, its performance, and the way it was processed. This is often represented as the “materials science tetrahedron”, shown below, which turns out to be something people really love to use. (You also sometimes see characterization float in the middle, because it applies to all of these aspects.)

The materials science tetrahedron, with structure at the top vertex, properties, processing, and performance at the others, and characterization floating in the middle.

Those terms may sound meaningless to you, so let’s break them down. In materials science, structure goes beyond that of chemistry: it’s not just the makeup of an atom or molecule that affects a material; how the atoms/molecules are arranged together in a material also has a huge effect on how it behaves. You’re probably familiar with one common example: carbon and its various allotropes. The hardness of diamond is partially attributed to its special crystal structure. Graphite is soft because it is easy to slide its different layers across each other. Another factor is the crystallinity of a material. Not all materials you see are monolithic pieces. Many are made of smaller crystals we call “grains”. The size and arrangement of these grains can be very important. For instance, the silicon in electronics is made in such a way as to guarantee it will always be one single crystal, because boundaries between grains would ruin its electronic properties. Turbine blades in jet engines are single crystals, while steels used in structures are polycrystalline.

Diamond and its crystal structure on the left; graphite and its layered structure on the right.

Processing is what we do to a material before it ends up being used. This is more than just isolating the compounds you’ll use to make it. In fact, for some materials, processing actually involves adding impurities. Pure silicon wouldn’t be very effective in computers. Instead, silicon is “doped” with phosphorus or boron atoms, and the different doping makes it possible to build various electronic components on the same piece. Processing can also determine the structure – temperature and composition can be manipulated to help control the size of grains in a material.

The same steel with different grain sizes; going counterclockwise from the top, each segment shows smaller crystals.

Properties and performance are closely related, and the distinction can be subtle (and honestly, it isn’t something we distinguish that much). One idea is that properties describe the essential behavior of a material, while performance reflects how that translates into its use, or the “properties under constraints”. This splits “materials science and engineering” into materials science focusing on properties and materials engineering focusing on performance. But that distinction can get blurred pretty quickly, especially if you look at different subfields. Someone who studies mechanical properties might say that corrosion is a performance issue, since it limits how long a material can be used at its desired strength. Talk to my colleagues next door in the Center for Electrochemical Science and Engineering, though, and they would almost certainly all consider corrosion to be a property of materials. Regardless, both of these depend on structure and processing. Turbine blades in jet engines are single crystals because this reduces fatigue over time. Structural steels are polycrystals because this makes them stronger.

Now that I’ve thought about it more, I realize the different parts of the tetrahedron explain the different ways we define materials science and engineering. My “materials science as applied physics and chemistry” view reflects the scale of structures we talk about, from the atoms that are typically chemistry’s domain, to the crystal arrangement, to the larger crystal as a whole, where I can talk about the mechanics of atoms and grains. The description of Goodenough separates materials science from physics and chemistry through the performance-driven lens of pragmatism. My professor’s focus on defects comes from the processing part of the tetrahedron.

The tetrahedron also helps define the relationship of materials science and engineering to other fields. First, it helps limit what we call a “material”. Our notions of structure and processing are very different from those of the chemical engineers, and they work best on solids. It also helps define the limits of the field. Our structures aren’t primarily governed by quantum effects, and we generally want defects, so we’re not redundant with solid-state physics. And when we talk about mechanics, we care a lot about the microstructure of the material, and rarely venture into the large-scale continuum mechanics of the mechanical and civil engineers.

At the same time, the tetrahedron also explains how interdisciplinary materials science is and can be. That makes sense, because the tetrahedron developed to help unify materials science. A hundred years ago, “materials science” wouldn’t have meant anything to anyone. People studying metallurgy and ceramics were in their own mostly separate disciplines. The term “semiconductor” was only coined in a PhD dissertation in 1910, and polymers were still believed to be aggregates of molecules attracted to each other rather than the long chains we know them to be today. The development of crystallography and thermodynamics helped tie all of these together by helping us define structures, where they come from, and how we change them. (Polymers are still a bit weird in many materials science departments, but that’s a post for another day.)

Each vertex is also a key branching-off point to work with other disciplines. Our idea of structure isn’t redundant with chemistry and physics; they build off each other. Atomic orbitals help explain why atoms end up in certain crystal structures. Defects end up being important in catalysts. Or we can look at structures that already exist in nature as an inspiration for our own designs. One of my professors explained how he once did a project studying turtle shells from the atomic to the macroscopic level, justifying it as a way to design stronger materials. Material properties put us in touch with anyone who wants our materials to go into their end products, from people designing jet engines to surgeons who want prosthetic implants, and they have us talk to physicists and chemists to see how different properties emerge from structures.

This is what attracted me to materials science for graduate school. We can frame our thinking on each vertex, but it’s also expected that we shift. We can think about structures on a multitude of scales. Now I joke that being a bad physics major translates into being great at most of the physics I need to use these days. The paradigm helps us approach all materials, not just the ones we personally study. Thinking with different applications in mind forces me to learn new things all the time. (When biomedical engineers sometimes try to claim they’re the “first” interdisciplinary field of engineering to come on the scene, I laugh, thinking that they forget materials science has been around for decades. Heck, now I have 20 articles I want to read about the structure of pearl to help with my new research project.) It’s an incredibly exciting field to be in.

Let’s Rethink Science Journalism

There’s been a lot of talk about science journalism after the revelation that a heavily publicized study about chocolate helping weight loss was actually a sham. A great deal of this is meta-commentary about whether or not the whole “sting” was ethical or if it even added much to ongoing discussions on science communication. It’s worth pointing out that science journalism in major outlets could be said to have worked for the most part, as they didn’t actually report on the study. The ScienceNews piece points out that a Washington Post reporter did want to write up something on the study and dropped it when he became suspicious. HuffPo would be the obvious exception in that it evidently had TWO pieces at one point on the study, but its science and health sections have historically been pretty questionable. (The science section has gotten better lately. I don’t know about the health section.)

I’m going to mainly focus on science in general publications, because that’s what most people see. And because science journalism in general publications has a weird organization. The standard treatment seems to be that a science journalist should be able to write on any science topic, regardless of background. That increasingly strikes me as strange. The conceptual difference between, say, astronomy and neuroscience is huge. That’s not to say people can’t be good at covering multiple fields of science. Rachel Feltman at The Washington Post wonderfully covers developments from all over science. But I think we should recognize that this is an incredible talent that not everyone has. (Indeed, going over HuffPo’s recent pieces, it’s notable how many seem to come from actual scientists now compared to what seemed like a never-ending stream of uncredited articles probably coming from anyone with an Internet connection a few years ago.)

A man gazes upward at a floating collection of moons, frogs, butterflies, and crystals. It’s hard to actually have all this in your head.

Pretending that all science writers can cover everything harms science journalism. Where I think this shows up particularly clearly is coverage of work done by children. For instance, consider last year’s story about the 12-year-old who supposedly made a major breakthrough about lionfish. Let’s be clear: Lauren did a lot of research for a 12-year-old and contributed a lot to a science lab, and we should celebrate that. But so many outlets either exaggerated her father’s claims or took his overly hyped claims too much at face value, because it seems like none of the original reporters had any idea where her project fit in with other research. Similarly, there was the 15-year-old who was said to have “invented a way to charge your phone”, but whose project was similar to research that has been done for years (though again, Angelo ended up doing a lot of work for his age and seemed to develop a way to make the approach more effective).

I don’t think there’s a reason why a publication couldn’t cover its entire science section with more specialized journalists who also happen to work outside of science. For example, maybe someone covering the physical sciences could also cover engineering and manufacturing firms for business reporting, and someone else could be on a combined life sciences/health beat. And someone who can specialize and keep up to date on a smaller area can probably toss out names that better reflect the diversity of the research community, instead of just pulling up the same few powerful people who typically get referenced. In fact, probably one of the best trends in science coverage over the last decade has been the proliferation of pieces focusing on the social implications of science and of pieces that focus on how science is shaped by society. Reporting like that would benefit from more journalists and communicators who cover things both inside and outside of science and can give voice to diverse groups. It would also be great if these pieces actually called on scholars in the sociology, history, and/or philosophy of science and technology to help inform them.

An image announcing a panel discussion. Discussions like this reflect important conversations happening in society that need to happen in science, too, and they’re at their best when people can understand both science and society.

Carbon is Dead. Long Live Carbon?

Last month, the National Nanotechnology Initiative released a report on the state of commercial development of carbon nanotubes. And that state is mostly negative. (Which pains me, because I still love them.) If you’re not familiar with carbon nanotubes, you might know of their close relative, graphene, which has been in the news much more since the Nobel Prize awarded in 2010 for its discovery. Graphene is essentially a single layer of the carbon atoms found in graphite. A carbon nanotube can be thought of as a sheet of graphene rolled up into a cylinder.

Visualizing a single-walled (SW) carbon nanotube (CNT) as the result of rolling up a sheet of graphene.

If you want to use carbon nanotubes, there are a lot of properties you need to consider. Nearly 25 years after their discovery, we’re still working on controlling many of these properties, which are closely tied to how we make the nanotubes.

Carbon nanotubes have six major characteristics to consider when you want to use them:

  • How many “walls” does a nanotube have? We often talk about the single-walled nanotubes you see in the picture above, because their properties are the most impressive. However, it is much easier to make large quantities of nanotubes with multiple walls than single walls.
  • Size. For nanotubes, several things come into play here.
    • The diameter of the nanotubes is often related to chirality, another important aspect of nanotubes, and can affect both mechanical and electrical properties.
    • The length is also very important, especially if you want to incorporate the nanotubes into other materials or if you want to directly use nanotubes as a structural material themselves. For instance, if you want to add nanotubes to another material to make it more conductive, you want them to be long enough to routinely touch each other and carry charge through the entire material. Or if you want that oft-discussed nanotube space elevator, you need really long nanotubes, because stringing a bunch of short nanotubes together results in a weak material.
    • And the aspect ratio of length to width is important for materials when you use them in structures.
  • Chirality, which can basically be thought of as the curviness of how you roll up the graphene to get a nanotube (see the image below). If you think of rolling up a sheet of paper, you can roll it leaving the ends matched up, or you can roll it at an angle. Chirality is incredibly important in determining the way electricity behaves in nanotubes, and whether a nanotube behaves like a metal or like a semiconductor (like the silicon in your computer chips); a quick sketch of this rule follows the list. It also turns out that the chirality of nanotubes is related to how they grow when you make them.
  • Defects. Any material is always going to have some deviation from an “ideal” structure. In the case of carbon nanotubes, a tube can be missing carbon atoms or have extra ones, which replaces a few of the hexagons of the structure with pentagons or heptagons. Or impurity atoms like oxygen may end up incorporated into the nanotube. Defects aren’t necessarily bad for all applications. For instance, if you want to stick a nanotube in a plastic, defects can actually help it incorporate better. But electronics typically need nanotubes of the highest purity.
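
Here’s the quick sketch promised above, putting numbers on those chirality rules of thumb. The diameter formula and the “metallic when (n − m) is divisible by 3” test are the standard textbook approximations, and the (n, m) pairs are the ones labeled in the figure below:

```python
import math

GRAPHENE_LATTICE_CONSTANT = 0.246  # in nm

def describe_nanotube(n, m):
    """Classify an (n, m) carbon nanotube using the standard rules of thumb."""
    # Diameter from the chiral vector: d = a * sqrt(n^2 + n*m + m^2) / pi
    diameter = GRAPHENE_LATTICE_CONSTANT * math.sqrt(n**2 + n*m + m**2) / math.pi
    if m == 0:
        shape = "zigzag"
    elif n == m:
        shape = "armchair"
    else:
        shape = "chiral"
    # Textbook approximation: metallic when (n - m) is divisible by 3
    behavior = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return f"({n}, {m}) {shape}: d = {diameter:.2f} nm, {behavior}"

for n, m in [(10, 0), (10, 10), (10, 7)]:
    print(describe_nanotube(n, m))
```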

Some of the different ways a nanotube can be rolled up: a (10, 0) zigzag tube, a larger (10, 10) armchair tube, and a larger (10, 7) chiral tube. The numbers in parentheses are the “chiral vector” of the nanotube and determine its diameter and electronic properties.

Currently, the methods we have to make large amounts of CNTs result in a mix of tubes with different chiralities, if not also different sizes. (We have gotten much better at controlling diameter over the last several years.) For mechanical applications, the mixed chirality isn’t much of a problem. But if you have a bunch of CNTs of different conductivities, it’s hard to use them consistently for electronics.

But maybe carbon nanotubes were always doomed once we discovered graphene. Working from the idea of a CNT as a rolled-up graphene sheet, you may realize that this means there are way more factors that can be varied in a CNT than in a single flat flake of graphene. When working with graphene, there are just three main factors to consider:

  • Number of layers. This is similar to the number of walls of a nanotube. Scientists and engineers are generally most excited about single-layer graphene (which is technically the “true” graphene). The electronic properties change dramatically with the number of layers, and somewhere between 10 and 100 layers, you’re not that different from graphite. Again, the methods that produce the most graphene produce multi-layer graphene. But all the graphene made in a single batch will generally have consistent electronic properties.
  • Size. This is typically just one parameter, since most methods to make graphene result in roughly circular, square, or other equally shaped patches. Also, graphene’s properties are less affected by size than CNTs.
  • Defects. This tends to be pretty similar to what we see in CNTs, though in graphene there’s a major question of whether you can use an oxidized form or need the pure graphene for your application, because many production methods make the former first.

Single-layer graphene also has the added quirk that its electrical properties are greatly affected by whatever lies beneath it. However, that may be less of an issue for commercial applications, since whatever substrate is chosen for a given application will consistently affect all the graphene on it. In a world where we can now make graphene in blenders, or just fire up any carbon source ranging from Girl Scout cookies to dead bugs and let it deposit on a metal surface, it can be hard for nanotubes to sustain their appeal when growing them requires the additional steps of catalyst production and placement.

But perhaps we’re just leaving a devil we know for a more hyped devil we don’t. Near the end of last year, The New Yorker had a great article on the promises we’re making for graphene, the ones we made for nanotubes, and about technical change in general, which points out that we’re still years away from widespread adoption of either material for any purpose. In the meantime, we’re probably going to keep discovering other interesting nanomaterials, and just like people couldn’t believe we got graphene from sticky tape, we’ll probably be surprised by whatever comes next.