Lynn Conway, Enabler of Microchips

Are you using something with a modern microprocessor on International Women’s Day? (If you’re not, but are somehow able to see this post, talk to a doctor. Or a psychic.) You should thank Dr. Lynn Conway, professor emerita of electrical engineering and computer science at Michigan and member of the National Academy of Engineering, who is responsible for two major innovations that are ubiquitous in modern computing. She is most famous for the Mead-Conway revolution: she developed the “design rules” used in Very-Large-Scale Integration (VLSI) architecture, the scheme that underlies basically all modern computer chips. Conway’s rules standardized chip design, making the process faster, easier, more reliable, and, perhaps most significantly for broader society, easy to scale down, which is why we are now surrounded by computers.


She is less known for her work on dynamic instruction scheduling (DIS). DIS lets a computer execute a program out of order, so that later instructions that do not depend on the results of earlier ones can start running instead of the whole program stalling until certain operations finish. This lets programs run faster and also make more efficient use of processor and memory resources. Conway was less known for this work for years because she presented as a man when she began work at IBM. When Conway began her public transition to a woman in 1968, she was fired because the transition was seen as potentially “disruptive” to the work environment. After leaving IBM and completing her transition, Conway lived in “stealth”, which prevented her from publicly taking credit for her work there until the 2000s, when she decided to reach out to someone studying the company’s work on “superscalar” computers in the 60s.
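
To make the idea concrete, here is a toy Python sketch of out-of-order execution. This is my own illustration, not Conway’s actual IBM design: each made-up “instruction” lists the results it depends on and how many cycles it takes, and the scheduler starts any instruction as soon as its inputs are ready.

```python
# Toy model of dynamic instruction scheduling.
# Each instruction: name -> (results it depends on, cycles it takes).
program = {
    "a": ([], 3),       # slow load from memory
    "b": (["a"], 1),    # needs a's result
    "c": ([], 3),       # another slow load, independent of a and b
    "d": (["c"], 1),    # needs c's result
}

def in_order_cycles(program):
    """Fully serialized baseline: each instruction waits for the
    previous one to finish before it starts."""
    return sum(latency for _, latency in program.values())

def out_of_order_cycles(program):
    """Dataflow-limited schedule: each instruction starts as soon as
    all of its operands are available (assumes unlimited issue width)."""
    finish = {}
    for name, (deps, latency) in program.items():
        start = max((finish[d] for d in deps), default=0)
        finish[name] = start + latency
    return max(finish.values())

print(in_order_cycles(program))      # 8 cycles, fully serialized
print(out_of_order_cycles(program))  # 4 cycles: the two slow loads overlap
```

With two independent slow loads, the dataflow-limited schedule finishes in half the cycles of the serialized one, which is the same reason real out-of-order processors can hide memory latency.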

Since coming out, Dr. Conway has been an advocate for trans rights, in science and in society. As a scientist herself, Dr. Conway is very interested in how trans people and the development of gender identity are represented in research. In 2007, she co-authored a paper showing that mental health experts seemed to be dramatically underestimating the number of trans people in the US by relying on studies of transition surgeries alone. In 2013 and 2014, Conway worked to make the IEEE’s Code of Ethics inclusive of gender identity and expression.

A good short biography of Dr. Conway can be found here. Or read her writings on her website.


The Demographics of Granola Science

It is taken for granted by a certain segment of our pundit class that Republicans and/or conservatives don’t play well with science. But after a decade of mining that trope, it’s starting to seem thin. Fortunately for the pundits, there’s a whole other half of the political spectrum to criticize with vague framings of science policy. In January, The New Republic published the most recent major piece in this trend about Democrats (or maybe liberals, or who knows, because the article uses them interchangeably) not really following science. Armstrong, like most authors, criticizes the cultural qualities everyone “knows” are emblematic of the left: alternative medicine, an obsession with “natural” products, anti-GMO sentiment, and distrust of nuclear power. But that’s my issue: most of the article, and others like it, is powered by the author’s sentiments on the party in question with little to support them.

Right off the bat in explaining homeopathy, Armstrong has to use the official platform of the Green Party to make one of his points in an article nominally about Democrats. Then Armstrong strains to accept that liberals actually don’t differ much from conservatives in terms of GMO acceptance. Also, weirdly, Armstrong skipped the data that fit the title of his article better: people who identified as or leaned Republican were a bit more likely to say genetically modified food is safe than those who identified as or leaned Democrat, 43% to 38%, although the analysis says the differences weren’t statistically significant by party or ideology. (Still, I’m a bit surprised by the swing. I would have assumed that comparing “Democrats” rather than “liberals” would have weeded out more of the granola-y GMO skeptics.) But that brings me to my main point: so many of these conversations proceed strangely, without any data.

Again, we don’t get a data point proving that homeopathy is much more common among Democrats/liberals than among Republicans/conservatives. Technically, Armstrong does link to a YouGov poll claiming that liberals are the “worst” on the issue, but the summary doesn’t include a breakdown by ideology, and the link to the full survey results misdirects to one on tax policy. Meanwhile, libertarians and conservatives have proven sympathetic to homeopaths through the health freedom movement. Laws “protecting” practitioners of homeopathy and other alternative medicine systems from charges of unlicensed practice of medicine have passed in blue, red, and purple states: Alaska, Colorado, Georgia, Massachusetts, New York, North Carolina, Ohio, Oklahoma, Oregon, Texas, and Washington. Heck, Walmart and CVS sell homeopathic “medicine” now. It’s worth pointing out that the data I’m asking for here is hard to find. There are some studies on the demographics of homeopathy users, but few cover American populations, and social and political norms don’t line up neatly across cultures. (Here’s a UK one and a Norwegian one.) Seriously, American grad students in sociology (of science/tech/medicine) or public health, there’s a great topic for you to dissect here.

I’m open to the idea that homeopathy is more prevalent among liberals, but most of the conversation is just anecdotes and stereotype. And in that case, why does major “alternative medicine” guy Dr. Mercola have a lot of support in conservative circles? Similarly, although vaccination skeptics don’t have to be homeopaths, they’re often stereotyped as liberals. But in 2012, it was two Republican Congressmen who basically heckled an NIH institute director about mercury in vaccines causing autism in a Committee on Oversight and Government Reform hearing. And based on this study by the Cultural Cognition Project, you’re actually least likely to question vaccine safety if you’re very liberal and a bit more likely than average if you’re somewhat conservative, although I don’t think the difference is significant.


You can look at local data for trends, but that doesn’t map cleanly onto politics either. Silicon Valley daycares allegedly have low vaccination rates, except at the biotech companies. But according to this reporting on California vaccination exemptions, conservative Orange County also has one of the highest rates of exemptions. (Although maybe liberal anti-vaxxers are more likely to cluster together?)

In an article in Policy Review titled “Science, Faith, and Alternative Medicine”, philosopher Ronald Dworkin pointed out that alternative medicine tends not to be a partisan issue:

The confusion surrounding alternative medicine is reflected in the political arena, causing deep divisions within both the liberal and conservative camps. Within the conservative camp, libertarians see any governmental regulation of the alternative medicine movement as a violation of individual freedom. Cultural conservatives, on the other hand, are suspicious of the movement’s links to anti-Western multiculturalism. Within the liberal camp, progressives like Sen. Ted Kennedy and Rep. Henry Waxman have pushed for greater regulation of the alternative medicine industry in the spirit of consumer protection. Yet multiculturalists want alternative medicine to flourish unimpeded because they see it as a powerful weapon to use against traditional Western ideas.

Granted, he wrote that in 2001, and in the same article he feared that alternative medicine would end up becoming politically polarized. But the data above suggest that still hasn’t happened.

I think we should target liberals who are “bad” on science issues just as much as conservatives, as in the famous Daily Show clip on the Outbreak of Liberal Idiocy. But the trope of these “bad science issues for liberals” misses a lot. Even the Daily Show clip framed it as a specific cultural strain of liberalism (see Bee’s question below to test the qualifications of her vaccine denialist), not liberalism writ large – and the interviewed expert points out that vaccine denialism matches up with wealthy, white, college-educated people. The trope also often seems like a weak cop-out: conservative writers use it as if it somehow excuses their own movement’s problems with science, while liberal or moderate authors use it as a poor attempt at crossing the aisle, without any clear understanding of whether the stereotype is true or where it comes from.

(You’ll notice I’m skipping the last section of the article here, and that’s for good reason. Armstrong actually provides a good data point, not just a stereotype, on Democrats’ skepticism towards nuclear energy. And I agree with his basic argument: unreasonable skepticism towards nuclear just leads to the adoption of other power sources, and right now, those are overwhelmingly fossil fuels. There is something interesting there, though: while polarization on nuclear power increases with scientific literacy, liberals still accept it more as they become more informed – conservatives just come to accept it much faster. Also, can we guarantee this won’t become a NIMBY problem if our bipartisan elites suddenly agreed?)

You might ask, why does this matter? Well, framing currently nonpartisan/nonideological issues as partisan can turn them into ideological markers. So a trope of just assuming Democrats hate GMOs and vaccines and love homeopathy can become a self-fulfilling prophecy, which would go against the goal of most of these articles I’m criticizing (and my own!). There’s probably a decent case to be made that this process happened with the politics of climate change, though I could only offer a vague outline at the moment without digging through more sources. The stereotype also doesn’t suggest a clear way to deal with the groups who reject the science on these issues. And there’s my obvious complaint that it ignores that conservatives, and even moderates, (probably) represent a significant share of people with these beliefs.

But even just within liberal groups, there’s diversity in the way these beliefs manifest. You’re not going to convince a center-left Silicon Valley anti-vaxxer to vaccinate with the same approach as some honest-to-goodness hippie, because they’re in cultural environments with different values. This also applies to groups on the right. Good science communication accepts that those values matter just as much as the relevant knowledge if you want to engage meaningfully with someone. And if there’s something we should be learning from last year, it’s that figuring out how to communicate with people who hold different values is going to be a major part of approaching politics in the future.

Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: What properties do people use in clarifying these definitions, and how much does that vary by background? Personally, I would say I work much closer to the ideal of “graphene” than lots of people working with more extensively chemically modified graphene derivatives, and I’m fine with using the term for almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even at that low end, find that maddening?
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, lots of people with extensive polymer science backgrounds note that many papers don’t refer to basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they wanted to incorporate into composites and then moved into the composites themselves. They may have backgrounds in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, it was noted in one paper I read that a lot of talk about solutions of nanoparticles probably would be more precise if the discussion was framed in terminology of colloids and dispersions.

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in successful products of well-studied cluster reactions, although these are pushing the term “nano” in the other direction (they may be too small).
    • Is this a reflection of applications of defects at the different scales? (More philosophically worded, are defects treated differently because of their teleological nature?) At the bulk level, we work to engineer the nature of defects to help develop the properties we want. At the nanoscale, some structures can basically be ruined for certain applications by the mislocation of a single atom. Is this also a reflection of the current practical process of needing to scale up the ability to make nanomaterials? E.g. as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to that of how we treat defects in the bulk?

*Okay, more like anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.

Using Plants to Turn Pollution into Profits

Once again, I may prove why I’m a poor writer by burying a lede. But bear with me here, because this will come full circle and yield some fruit. You probably know that urban farming has become more popular over the last decade or so as local eating became trendy. As city dwellers started their own plots, people realized there might be a challenge unique to urban areas: avoiding lead poisoning. (Although a more recent study evidently suggests you’re getting less lead than people expected.) We used lead in lots of things throughout the 20th century, and it easily accumulated in soils with heavy exposure – so cities and areas by busy highways have lead from old gas emissions, old lots have lead from old paint, and even old lead pipes and batteries can leach lead into soils. Other pollutants build up in other places: mercury and cadmium can accumulate where significant amounts of coal are burned, and many mining practices leak a lot of the relevant metal into the environment.

Traditionally, the way to deal with polluted soil is to literally dig it all up. This has a major drawback, in that completely replacing a soil patch also means you throw out some nice perks of the little micro-ecosystem that was developed, like root systems that help prevent erosion or certain nutrient sources. Recently, a new technique called phytoremediation has caught on, and as the NYT article points out, it takes advantage of the fact that some plants are really good at absorbing these metals from the soil. We now know of so-called hyperaccumulators of a lot of different metals and a few other pollutants. These are nice because they concentrate the metals for us into parts of the plants we can easily dispose of, and they can help preserve aspects of the soil we like. (And roots can help prevent erosion of the soil into runoff to boot.) Of course, one drawback here is time. If you’re concerned that a plot with lead might end up leaching it into groundwater, you may not want to wait for a few harvests to go by to get rid of it.

But a second drawback seems like it could present an opportunity. A thing that bugged me when I first heard of hyperaccumulators was that disposing of them still seemed to pose lots of problems. You can burn the plants, but you would need to extract the metals from the fumes, or it just becomes like coal and gas emissions all over again. (Granted, it is a bit easier when you have it concentrated in one place.) Or you can just throw away the plants, but again, you need to make sure you’re doing it in a place that will safely keep the metals as the plants break down. When I got to meet someone who studies how metals accumulate in plants and animals last summer, I asked her if there was a way to do something productive with those plants that now had concentrated valuable metals. Dr. Pickering told me this is called “phytomining”, and that while people looked into it, economic methods still hadn’t been developed.

That looks like it may have changed last month, when a team from China reported making multiple nanomaterials from two common hyperaccumulators. The team studied Brassica juncea, which turns out to be mustard greens, and Sedum alfredii, a native herb, both of which are known to accumulate copper and zinc. The plants were taken from a copper-zinc mine in Liaoning Province, China. The plants were first dissolved in a mix of nitric and perchloric acid, but literally just heating the acid residue managed to make carbon nanotubes. Adding some ammonia to the acid residue formed zinc oxide nanoparticles in the Sedum, and zinc oxide with a little bit of copper in the mustard greens. What’s really interesting is that the structure and shape of the nanotubes seemed to correlate with the size of the vascular bundles (a plant equivalent of arteries/veins) in the different plants.


A nanotube grown from the mustard greens. Source.

But as Dr. Pickering said to me, people have been looking into this for a while (indeed, the Chinese team has similar papers on this from 5 years ago). What’s needed for phytomining to take off is for it to be economical. And that’s where the end of the paper comes in. First, the individual materials are valuable. The nanotubes are strong and conductive and could have lots of uses. The zinc oxide particles already have some use in solar cells, and could be used in LEDs or as catalysts to help break down organic pollutants like fertilizers. Second, the authors say they managed to make the nanotubes really cheaply compared to other methods: they claim they could make a kilogram for $120, while bulk prices from commercial suppliers of similar nanotubes are about $600/kg. (And I can’t even find that; looking at one of my common suppliers, I see multiwalled nanotubes selling on the order of $100 per gram.) What’s really interesting is that they claim they can make a composite of the nanotubes and copper/zinc oxide particles that might be even more effective at breaking down pollutants.
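
For a sense of scale, here are the quoted prices put on a common per-kilogram basis. All the dollar figures come straight from the paragraph above; the only thing I’ve added is the gram-to-kilogram conversion.

```python
# Nanotube prices from the post, normalized to dollars per kilogram.
claimed_cost = 120            # $/kg, the authors' claimed production cost
bulk_quote = 600              # $/kg, quoted bulk price from commercial suppliers
retail_quote = 100 * 1000     # $100 per gram at one supplier -> $/kg

print(bulk_quote / claimed_cost)    # 5x cheaper than the bulk figure
print(retail_quote / claimed_cost)  # over 800x cheaper than the per-gram quote
```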

I imagine there will be some unforeseen issue in attempting to scale this up (because it seems like there always is). But this is an incredibly cool result. Common plants can help clean up one kind of pollution and be turned into valuable materials to help clean up a second kind of pollution. That’s a win-win.

Comparing Birth Control Trials Today to Those in the 60s Ignores a Sea Change in Research Ethics

Vox has a wonderful article on the recently published male birth control study that is a useful corrective to the narrative falsely equating it to the original studies of The Pill. (Though I’d say ignore their title, too, because it’s not that helpful a narrative either.) But the content is useful in arguing against what seems like a terrible and callous framing of the study in most commentary. The key line: “And, yes, the rate of side effects in this study was higher than what women typically experience using hormonal birth control.” Also, can we point out that if something like 10 women a year at a school like UVA were committing suicide, and it might be linked to a medication they were taking, people would probably be concerned? There’s something disturbing about well-off American women mocking these effects, which seemed to disproportionately affect men of color (the most side effects were reported from the Indonesian center, followed by the Chilean center).

My bigger concern here, though, is that most people seem not to understand (or are basically ignoring) how modern research ethics works. For instance, the benefits weighed in the evaluation of whether to continue a study aren’t merely the potential benefits of the treatment, but the added benefit of acquiring more data. This was an efficacy study (so I think Phase II, or maybe a combined Phase I/II, although it might have been a really small Phase III trial). It seems like the institutional review board felt enough data had been collected to reach conclusions on efficacy, so more data didn’t justify the potentially high rate of adverse effects. Which also DOES NOT mean that this treatment has been ruled out forever. The authors themselves recommend further development based on the 75% of participants claiming they would use this birth control method if it were available. I imagine they will tweak the formulation a bit before moving on to further trials. Also, it’s sort of amusing that complaints on this come from people who typically think moves toward regulatory approval are controlled by Big Pharma at the expense of patient health.

Yes, this is different from the initial birth control trials. Yes, the women of Puerto Rico were chosen as human guinea pigs. Though it’s worth pointing out another major factor in choosing Puerto Rico: it actually had a pretty well organized family planning infrastructure in the 50s and 60s. Admittedly, there’s more racism almost certainly coming into play there, because the politics of family planning were super complicated through the early and mid 20th century, and there were definitely overlaps between eugenics and family planning. It’s also worth pointing out that the study was encouraged by Margaret Sanger (and earlier studies by Planned Parenthood). Also, the FDA didn’t even initially approve Enovid for contraception, because the atmosphere was so repressive back then on reproductive health; it was approved for menstrual disorders but prescribed off-label for contraception, which is how we know so many women desperately wanted the pill. Heck, even the Puerto Rico study was nominally about seeing if the pill helped with breast cancer. It took another year of discussion by the researchers and companies to get the FDA to finally approve contraception as an on-label use. The company making the pill was actually so concerned about the dosage causing side effects that it begged for FDA approval of a lower dose just for contraception (see pages 27-28 there), but it was rebuffed for another year or two and refused to market the initial dose solely for contraception. (Also, to clarify, no one is taking these medications anymore. These versions of the pill were phased out in the 80s.)

Was there sexism at play? Absolutely, and I totally get that. But that doesn’t mean the narrative from 2016 neatly maps onto the narrative of the 1950s and 1960s. Which brings me to my last point. If your view of research ethics is primarily colored by the 1960s, that’s terrifying. You know what else happened at the same time as the initial contraception pill studies? The US government was still letting black men die of syphilis in the name of research. The tissue of Henrietta Lacks was still being cultured without the knowledge or consent of anyone in her family. (And the way they were informed was heartbreaking.) People were unknowingly treated or injected with radioactive material (one of many instances is described here in the segment of testimony by Cliff Honicker). One study involved secretly injecting healthy people with cancer cells, and to prove a theme, those cells were descendants of the ones originally cultured from Henrietta Lacks. Heck, there’s the Milgram experiment, and the Stanford Prison Study was in the 70s. The ethics of human experimentation were a mess for most of the 20th century, and really, most of the history of science. Similarly, medical ethics were very different at the time. Which isn’t to justify those things. But don’t ignore that we’ve been working to make science and research more open, collaborative, and just over the last few decades – and right now people seem more caught up in making humorous or spiteful points than in continuing that work.

(Other aside: it’s worth pointing out that the comparison here probably does have to be to condoms, which, you know, skip the side effects, though their typical effectiveness rate is worse. Most of the methods don’t obviously change ejaculate, so unless measuring sperm concentration and motility is a couple’s idea of foreplay, sexual partners who don’t know each other well will still probably want a condom [or unfortunately another method, because yes, the system is sexist and women are expected to do more] as assurance. It’s worth pointing out that the study design only worked with “stable” couples who were mutually monogamous and planned on staying together for at least a year during the study, so there presumably was a high degree of trust in these relationships.)

Reclaiming Science as a Liberal Art

What do you think of when someone talks about the liberal arts? Many of you probably think of subjects like English and literature, history, classics, and philosophy. Those are all a good start for a liberal education, but those are only fields in the humanities. Perhaps you think of the social sciences, to help you understand the institutions and actors in our culture; fields like psychology, sociology, or economics. What about subjects like physics, biology, chemistry, or astronomy? Would you ever think of them as belonging to the liberal arts, or would you cordon them off into the STEM fields? I would argue that excluding the sciences from the liberal arts is both historically wrong and harms society.

First, let’s look at the original conception of the liberal arts. Your study would begin with the trivium, the three subjects of grammar, logic, and rhetoric. The trivium has been described as a progression of study into argument. Grammar is concerned with how things are symbolized. Logic is concerned with how things are understood. Rhetoric is concerned with how things are effectively communicated, because what good is it to understand things if you cannot properly share your understanding with other learned people? With its focus on language, the trivium does fit the common stereotype of the liberal arts as a humanistic writing education.

But it is important to understand that the trivium was considered only the beginning of a liberal arts education. It was followed by the supposedly more “serious” quadrivium of arithmetic, geometry, music, and astronomy. The quadrivium is focused on number and can also be viewed as a progression. Arithmetic teaches you about pure numbers. Geometry looks at number to describe space. Music, as it was taught in the quadrivium, focused on the ratios that produce notes and the description of notes in time. Astronomy comes last, as it builds on this knowledge to understand the mathematical patterns in space and time of bodies in the heavens. Only after completing the quadrivium, when one would have a knowledge of both language and numbers, would a student move on to philosophy or theology, the “queen of the liberal arts”.


The seven liberal arts surrounding philosophy.

Although this progression might seem strange to some, it makes a lot of sense when you consider that science developed out of “natural philosophy”. Understanding what data and observations mean, whether they come from an ordinary experiment or “big data”, is a philosophical activity. As my professors say, running an experiment without an understanding of what I measured makes me a technician, not a scientist. Or consider alchemists, who included many great experimentalists who developed some important chemical insights, but who are typically excluded from our conception of science because they worked with different philosophical assumptions. The findings of modern science also tie into major questions that define philosophy. What does it say about our place in the universe if there are 10 billion planets like Earth in our galaxy, or when we are connected to all other living things on Earth through chemistry and evolution?

We get the term liberal arts from Latin, artes liberales, the arts or skills that are befitting of a free person. The children of the privileged would pursue those fields. This was in contrast to the mechanical arts – fields like clothesmaking, agriculture, architecture, martial arts, trade, cooking, and metalworking. The mechanical arts were a decent way for someone without status to make a living, but still considered servile and unbecoming of a free (read “noble”) person. This distinction breaks down in modern life because we are no longer that elitist in our approach to liberal education. We think everyone should be “free”, not just an established elite.

More importantly, in a liberal democracy, we think everyone should have some say in how they are governed. Many major issues in modern society relate to scientific understanding and knowledge. To talk about vaccines, you need to have some understanding of the immune system. The discussion over chemicals is very different when you know that we are made up of chemicals. It is hard to understand what is at stake in climate change without a knowledge of how Earth’s various geological and environmental systems work, and it is hard to evaluate solutions if you don’t know where energy comes from. Or how can we talk about surveillance without understanding how information is obtained and distributed? The Founding Fathers said they had to study politics and war to win freedom for their new nation. As part of a liberal education, Americans today need to learn science in order to keep theirs.

(Note: This post is based on a speech I gave as part of a contest at UVA. It reflects a view I think often goes unconsidered in education discussions, so I wanted to adapt it into a blog post.

As another aside, it’s incredibly interesting that people now tend to unambiguously think of the social sciences as part of the liberal arts while wavering more on the natural sciences, since the idea of a “social” science wasn’t really developed until well after the conception of the liberal arts.)

Why Can’t You Reach the Speed of Light?

A friend from high school had a good question that I wanted to share:
I have a science question!!! Why can’t we travel the speed of light? We know what it is, and that its constant. We’ve even seen footage of it moving along a path (it was a video clip I saw somewhere [Edit to add: there are now two different experiments that have done this. One that requires multiple repeats of the light pulse and a newer technique that can work with just one]). So, what is keeping us from moving at that speed? Is it simply an issue of materials not being able to withstand those speeds, or is it that we can’t even propel ourselves or any object fast enough to reach those speeds? And if its the latter, is it an issue of available space/distance required is unattainable, or is it an issue of the payload needed to propel us is simply too high to calculate/unfeasable (is that even a word?) for the project? Does my question even make sense? I got a strange look when I asked someone else…
This question actually makes a lot of sense, because when we talk about space travel, people often use light-years to describe the vast distances involved and point out how slow our own methods are in comparison. But it turns out the roadblock is fundamental, not just practical. We can’t reach the speed of light, at least in our current understanding of physics, because relativity says this is impossible.

To put it simply, anything with mass can’t reach the speed of light. This is because E=mc2 works in both directions. This equation means that the energy of something is its mass times the speed of light squared. In chemistry (or a more advanced physics class), you may have talked about the mass defect of some radioactive compounds. The mass defect is the difference in mass before and after certain nuclear reactions; that missing mass was actually converted into energy. (This energy is what is exploited in nuclear power and nuclear weapons. Multiplying by the speed of light squared means even a little mass equals a lot of energy. The Little Boy bomb dropped on Hiroshima had 140 pounds of uranium, and no more than two pounds of that are believed to have undergone fission to produce the nearly 16 kiloton blast.)

But it also turns out that as something with mass goes faster, its kinetic energy also acts like extra mass. This “relativistic mass” grows enormously as you approach the speed of light. So the faster something gets, the heavier it becomes and the more energy you need to accelerate it. It’s worth pointing out that the accelerating object hasn’t actually gained material – if your spaceship was initially, say, 20 moles of unobtainium, it is still 20 moles of material even at 99% of the speed of light. Instead, the increase in “mass” is due to the geometry of spacetime as the object moves through it. In fact, this is why some physicists don’t like using the term “relativistic mass” and would prefer to focus on the relativistic descriptions of energy and momentum. What’s also really interesting is that the math underlying this in special relativity also implies that anything that doesn’t have mass HAS to travel at the speed of light.

A graph with the x-axis showing speed relative to light and the y-axis showing energy. A line representing the kinetic energy of the object shoots up steeply as it approaches light speed.

The kinetic energy of a 1 kg object at various fractions of the speed of light. For reference, 10^18 J is about a tenth of the United States’ annual electrical energy consumption.

The graph above represents the (relativistically corrected) kinetic energy of a 1 kilogram (2.2 pound) object at different speeds. You can basically think of it as representing how much energy you need to impart to the object to reach that speed. In the graph, I started at one ten-thousandth the speed of light, which is about twice the speed the New Horizons probe was launched at. I ended it at 99.99% of the speed of light. Just getting to 99.999% of the speed of light would have brought the maximum up another order of magnitude.
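If you want to reproduce the curve yourself, the formula behind the graph is the standard relativistic kinetic energy, KE = (γ − 1)mc², where γ = 1/√(1 − v²/c²). A minimal sketch (the particular speeds sampled here just mirror the range described above):

```python
# Sketch: relativistic kinetic energy KE = (gamma - 1) * m * c^2
# for a 1 kg object at various fractions of the speed of light.
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy(mass_kg, fraction_of_c):
    """Relativistic kinetic energy (J) at speed v = fraction_of_c * c."""
    gamma = 1.0 / math.sqrt(1.0 - fraction_of_c**2)
    return (gamma - 1.0) * mass_kg * C**2

for beta in (1e-4, 0.5, 0.99, 0.9999):
    print(f"{beta:g} c -> {kinetic_energy(1.0, beta):.2e} J")
```

At one ten-thousandth of c this reproduces the familiar ½mv² almost exactly, but by 99.99% of c the energy for a single kilogram is over 6 × 10^18 J – more than half a year of US electricity use – and the divisor √(1 − v²/c²) goes to zero at v = c, which is the math saying the required energy is infinite.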
Edit to add (9/12/2017): A good video from Fermilab argues against relativistic mass, but concedes it helps introduce relativity to more people.