Thinking of the Urban as Natural


“Name everything you can think of that is alive.” This was the prompt given to three different groups of children in Argentina: the Wichi, an indigenous people of the Gran Chaco forest, and rural and urban Spanish speakers. It might not surprise you that the indigenous children, who interact with wildlife directly, named the most plants and animals, mostly ones that lived nearby and were native to the region, and they often gave very specific names. The rural children named a mixture of native Argentinian wildlife and animals associated with farming. But the urban children were very different from the others. They named only a few animals found in Argentina. Instead, they named significantly more “exotic” animals from forests and jungles in other countries and continents. This result has been replicated in multiple studies on child development. But we shouldn’t be so hard on the urban children.

This reflects a somewhat uncomfortable truth about how we learn. If you live in a city, you mainly learn about nature indirectly, through pop culture and formal science education. In both contexts, it is much easier to find information about “exotic” animals like lions or tigers than about most of the organisms that make a home in the city. I think this is a symptom of a deeper cultural notion: that cities are somehow “fake” environments divorced from nature. I will argue that this distinction between the urban and the natural is not only wrong, but also harmful to our society.

First, we should recognize that this notion is a relatively recent development in our history. Cities are young in a geological and even an anthropological sense, but for as long as we have been building them as a species, they have been shaped by nature. We talk about “cradles of civilization” because they were places where the natural environment was well-suited to supporting early, complex social systems and their infrastructure. To use the literal Ur-example, consider the Fertile Crescent, the region watered by the Tigris and Euphrates rivers. It provided fertile soil at several elevations, which supported the growth of a variety of crops and made irrigation easier. And many modern cities can still be traced back to earlier environmental decisions. I am from Louisville, a city on a stretch of the Ohio River that boats could not pass until locks were built in the 1830s. The city was founded as a natural stopping point for travelers before they continued on to the Mississippi River.

Second, it seems incredibly alienating to argue that most of humanity is “unnatural”. Since 2008, the majority of humans have lived in cities, and by 2050, roughly 70% of the global population is projected to live in urban areas. We should not discourage the growth of cities or devalue them when their more efficient use of resources and infrastructure is necessary to keep projected population growth sustainable. Smart urban development recognizes that cities can help preserve other environments.

Finally, this urban-natural distinction distorts our understanding of the environmental and ecological processes that affect cities, and even our broader understanding of the environment. A recent study showed that insects help reduce food waste just as much as rodents do in New York City – for every memeable “pizza rat” there’s an army of “pizza ants” getting rid of rotting food. Despite their importance, the renowned insect collection at New York’s American Museum of Natural History contains almost no species native to the city. And since many city-dwellers, like the urban Argentinian children, mostly know exotic species, conservation efforts suffer. Well-known “charismatic” species like pandas or rhinos have support all over the world, but few people are aware of endangered species in urban areas. And sometimes even scientists don’t know. For instance, relating to the above, an estimated 40% of insect species are endangered, but we don’t know if that number is different in cities.

Instead of rejecting the last few thousand years of our society’s development, we should (re)embrace cities as part of the broader natural world. Recognizing that cities can have their own rich ecological and environmental interactions can help us build urban spaces that are better for us humans, other city-dwelling creatures, and the rest of the world.

(Note: This post is based on a speech I gave as part of a contest at UVA, the Moomaw Oratorical Contest. And this year I won!)


I Have a Hard Time Summing Up My Science and Politics Beliefs Into a Slogan

From a half-joking, half-serious post of my own on Facebook:



Evidently, I am the alt-text from this comic.

“HERE ARE SOME GOOD ARTICLES ABOUT PHILOSOPHY AND SOCIOLOGY OF SCIENCE” (I didn’t actually give a list, since I knew I would never really be able to fit one on a poster, but here are some suggested readings if you’re interested: the Decolonizing Science Reading List curated by astrophysicist Chanda Prescod-Weinstein, a recent article from The Atlantic about the March for Science, a perspective on Doing Science While Black, and the history of genes as an example of the evolution of scientific ideas. Honestly, there’s a lot here, and this is just stuff I shared on my Facebook page over the last few months.)

Quantum Waves are Still Physical, Regardless of Your Thoughts

Adam Frank, founder of NPR’s science and culture blog 13.7, recently published an essay on Aeon about materialism. It’s a bit hard to pin down what he’s trying to say, partly because of the different focuses of the essay’s two titles, and partly because of his own arguments. First, the titles. The one I saw first, which is what is displayed when the essay is shared on Facebook, is “Materialism alone cannot explain the riddle of consciousness”. But on Aeon, the title is “Minding matter”, with the subtitle “The closer you look, the more the materialist position in physics appears to rest on shaky metaphysical ground.” The question of theories of mind is very different from the question of philosophical interpretations of quantum mechanics.

This shows up in the article itself, which I found confusing because Frank ties together several different arguments and muddles them with various ideas of “realism” and “materialism”. First, his conception of theories of mind is confusing. I’d say the average modern neuroscientist or other scholar of cognition is a materialist, but I’d be hesitant to say the average one is a reductionist who thinks thought depends directly on the specific atoms in your brain. Computational theories of mind tend to be some of the most popular ones, and it’s hard to consider those reductionist. I would concede there may be too much of an experimental focus on reductionism (and that’s what has diffused into pop culture), but the debate over how to move from those experimental techniques to theoretical understanding is happening: see the recent attempt at using neuroscience statistical techniques to understand Donkey Kong.

I also think he makes a bit of an odd claim about reductionism in the other sciences in this passage:

A century of agnosticism about the true nature of matter hasn’t found its way deeply enough into other fields, where materialism still appears to be the most sensible way of dealing with the world and, most of all, with the mind. Some neuroscientists think that they’re being precise and grounded by holding tightly to materialist credentials. Molecular biologists, geneticists, and many other types of researchers – as well as the nonscientist public – have been similarly drawn to materialism’s seeming finality.

Yes, he technically calls it materialism, but he seems to basically equate it with reductionism by assuming the other sciences are fine with being reducible to physics. But, first, Frank should know better from his own colleagues. The solid-state folks in his department work a lot with “emergentism” and point out that the supposedly more reductionist particle people now borrow concepts from them. And he should definitely know from his collaborators at 13.7 that the concept of reducibility is controversial across the sciences. Heck, even physical chemists take issue with being reducible to physics and will point out that QM models can’t fully reproduce aspects of the periodic table. Per the above, it’s worth pointing out that Jerry Fodor, a philosopher of mind and cognitive scientist who does believe in a computational theory of mind, disputes the idea of reductionism.


This is funny because this tends to be controversial, not because it’s widely accepted.

Frank’s view on the nature of matter is also confusing. Here he seems to suggest that “materialism” can only refer to particulate theories of matter, i.e. something an instrument could (in theory) definitely touch. But modern fundamental physics does accept fields and waves as real entities. “Shut up and calculate” isn’t useful for ontology or epistemology, but his professor’s pithy response actually isn’t that. Quantum field theories would agree that “an electron is that to which we attribute the properties of the electron”, since electrons (and all particles) can actually take on any value of mass, charge, spin, etc. as virtual particles (which really do exist, but only temporarily). The conventional values are what one gets through the process of renormalization in the theory. (I might be misstating that, since I never actually got to doing QFT myself.) I would say this doesn’t mean electrons aren’t “real” or understood, but it does suggest that quantum fields are ontologically more fundamental than the particles. If it makes more physical sense for an electron to be a probability wave, that’s bully for probability waves, not a lack of understanding. (Also, aside from experiments showing wave-particle duality, we’re now learning that even biochemistry depends on the wave nature of matter.)

I’m also not sure the discussion of wave function collapse does much work here. I don’t get why it would inherently undermine materialism, unless a consciousness interpretation were to win out, and as Frank admits, there’s still not much to support one interpretation over the other. (And even then, again, this could still be solved by a materialist view of consciousness.) He’s also ignoring the development of theories of quantum decoherence to explain wavefunction collapse as quantum systems interact with classical environments, and to my understanding, those are relatively agnostic to interpretation. (Although I think there’s an issue with timescales in quantitative descriptions.)
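For what it’s worth, the standard cartoon of decoherence (my gloss, not anything from Frank’s essay) is that entanglement with the environment suppresses the interference terms of a system’s density matrix:

```latex
% A qubit in a superposition a|0> + b|1> has density matrix rho.
% Coupling to the environment damps the off-diagonal (interference)
% terms on a decoherence timescale tau_d:
\rho = \begin{pmatrix} |a|^2 & a b^{*} \\ a^{*} b & |b|^2 \end{pmatrix}
\;\xrightarrow{\ \text{environment}\ }\;
\begin{pmatrix} |a|^2 & a b^{*}\, e^{-t/\tau_d} \\ a^{*} b\, e^{-t/\tau_d} & |b|^2 \end{pmatrix}
```

Note that this only explains why interference becomes unobservable; it doesn’t pick which outcome actually occurs, which is exactly why decoherence can stay agnostic between interpretations.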

From there, Frank says we should be open to things beyond “materialism” in describing the mind. But as with my complaint about the title differences, those arguments don’t really follow from the bulk of the article, which focuses on philosophical issues in quantum mechanics. Also, he seems open to emergentism in the second-to-last paragraph. Actually, here I think Frank missed out on a great discussion. There are some great philosophy of science questions to be asked at the level of QFT, especially with regard to epistemology, and especially for popular audiences. Even as a physics major, I accept specific aspects of the framework like renormalization mainly because “the math works”, which is different from how we treat other observables we measure. For instance, the anomalous magnetic moment is a very high-precision test of quantum electrodynamics, the quantum field theory of electromagnetism, and our calculation of it relies on renormalization. But the “unreasonable effectiveness of mathematics” can sometimes be wrong, and we might just be lucky in converging to something close. (Though at this point I might be pulling dangerously close to the Duhem-Quine thesis without knowing much of the technical details.) Instead, we got a mediocre crossover between the question of consciousness and interpretations of quantum mechanics, even though Frank tried hard to keep it from turning into “woo”.
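For the curious, that calculation treats the anomalous moment as a renormalized perturbation series in the fine-structure constant α, starting with Schwinger’s famous one-loop term (coefficients quoted from memory, so treat the digits as illustrative):

```latex
a_e \equiv \frac{g-2}{2}
= \frac{\alpha}{2\pi}
- 0.328\,478\ldots \left(\frac{\alpha}{\pi}\right)^{2}
+ \cdots
\approx 0.001\,159\,652\ldots
```

The series only makes sense after renormalization, yet it matches experiment to roughly a part per billion – which is precisely the “the math works” situation I mean.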

Lynn Conway, Enabler of Microchips

Are you using something with a modern microprocessor on International Women’s Day? (If you’re not, but are somehow able to see this post, talk to a doctor. Or a psychic.) You should thank Dr. Lynn Conway, professor emerita of electrical engineering and computer science at Michigan and member of the National Academy of Engineering, who is responsible for two major innovations that are ubiquitous in modern computing. She is most famous for the Mead-Conway revolution: she developed the “design rules” used in Very-Large-Scale Integration (VLSI) architecture, the scheme that underlies basically all modern computer chips. Conway’s rules standardized chip design, making the process faster, easier, more reliable, and – perhaps most significantly for broader society – easy to scale down, which is why we are now surrounded by computers.
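To give a flavor of what “design rules” means here: the Mead-Conway approach expresses every minimum width and spacing as a multiple of a single scalable length λ, so a layout checked once stays valid as fabrication shrinks λ. Here’s a minimal sketch of that idea in Python – the rule values are the classic textbook ones as I remember them, and the checker is my toy illustration, not any real tool:

```python
# Toy design-rule check in the spirit of the Mead-Conway lambda rules.
# All dimensions are in units of lambda, a single scalable length:
# shrinking a chip just means shrinking lambda, and the layout still passes.
# Rule values are the classic textbook ones (from memory); purely illustrative.

MIN_WIDTH = {"diffusion": 2, "poly": 2, "metal": 3}    # minimum wire widths
MIN_SPACING = {"diffusion": 3, "poly": 2, "metal": 3}  # same-layer spacing

def check_wire(layer: str, width: float, gap_to_neighbor: float) -> list:
    """Return a list of rule violations for one wire (empty list = clean)."""
    violations = []
    if width < MIN_WIDTH[layer]:
        violations.append(f"{layer} width {width} < {MIN_WIDTH[layer]} lambda")
    if gap_to_neighbor < MIN_SPACING[layer]:
        violations.append(
            f"{layer} spacing {gap_to_neighbor} < {MIN_SPACING[layer]} lambda")
    return violations

print(check_wire("metal", width=3, gap_to_neighbor=3))  # [] -- passes
print(check_wire("poly", width=1, gap_to_neighbor=2))   # width violation
```

The power of the abstraction is the decoupling: designers work in λ, and the fab decides what λ is in micrometers.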


She is less known for her work on dynamic instruction scheduling (DIS). DIS lets a computer execute a program out of order, so that later parts of the code that do not depend on the results of earlier parts can start running, instead of the whole program stalling until certain operations finish. This lets programs run faster and use processor and memory resources more efficiently. Conway’s work here went unrecognized for years because she presented as a man when she began working at IBM. When Conway began her public transition to a woman in 1968, she was fired because the transition was seen as potentially “disruptive” to the work environment. After leaving IBM and completing her transition, Conway lived in “stealth”, which prevented her from publicly taking credit for her work there until the 2000s, when she decided to reach out to someone studying the company’s work on “superscalar” computers in the 1960s.
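Here’s a heavily simplified sketch of how DIS works (my own toy, not Conway’s actual IBM design): track which registers each instruction reads and writes, and issue any instruction whose inputs are ready, rather than stalling everything behind a slow operation.

```python
# Toy model of dynamic instruction scheduling. Each instruction declares the
# registers it reads and writes; any instruction whose inputs are ready may
# issue, so independent work proceeds while a slow load is still in flight.
from dataclasses import dataclass

@dataclass
class Instr:
    name: str
    reads: set
    writes: set
    latency: int  # cycles until the result is available

program = [
    Instr("load r1 <- mem",   reads=set(),        writes={"r1"}, latency=5),
    Instr("add  r2 <- r1+1",  reads={"r1"},       writes={"r2"}, latency=1),
    Instr("mul  r3 <- r4*r5", reads={"r4", "r5"}, writes={"r3"}, latency=1),
]

def run_out_of_order(program):
    ready_at = {}  # register -> cycle when its pending value arrives
    waiting, cycle = list(program), 0
    while waiting:
        # Issue the first instruction whose source registers are all ready.
        issuable = [i for i in waiting
                    if all(ready_at.get(r, 0) <= cycle for r in i.reads)]
        if issuable:
            instr = issuable[0]
            waiting.remove(instr)
            for r in instr.writes:
                ready_at[r] = cycle + instr.latency
            print(f"cycle {cycle}: issue {instr.name}")
        cycle += 1

run_out_of_order(program)  # the independent mul issues before the add
```

An in-order machine would sit idle until the load finished; here the multiply slips into the gap. Real designs also handle write-after-write and write-after-read hazards and limited issue slots, which this toy ignores.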

Since coming out, Dr. Conway has been an advocate for trans rights, in science and in society. As a scientist herself, Dr. Conway is very interested in how trans people and the development of gender identity are represented in research. In 2007, she co-authored a paper showing that mental health experts seemed to be dramatically underestimating the number of trans people in the US by relying on studies of transition surgeries alone. In 2013 and 2014, Conway worked to make the IEEE’s Code of Ethics inclusive of gender identity and expression.

A good short biography of Dr. Conway can be found here. Or read her writings on her website.

The Demographics of Granola Science

It is taken for granted by a certain segment of our pundit class that Republicans and/or conservatives don’t play well with science. But after a decade of mining that trope, it’s starting to wear thin. Fortunately for the pundits, there’s a whole other half of the political spectrum to criticize with vague framings of science policy. In January, The New Republic published the most recent major piece in this trend, about Democrats (or maybe liberals, or who knows, because the article uses the terms interchangeably) not really following science. Armstrong, like most authors on this beat, criticizes the cultural qualities everyone “knows” are emblematic of the left: alternative medicine, an obsession with “natural” products, anti-GMO sentiment, and distrust of nuclear power. But that’s my issue: most of the article, like others of its kind, is powered by the author’s sentiments about the party in question, with little to support them.

Right off the bat, in explaining homeopathy, Armstrong has to use the official platform of the Green Party to make one of his points – in an article nominally about Democrats. Then Armstrong strains to accept that liberals actually don’t differ much from conservatives in GMO acceptance. Also, weirdly, Armstrong skipped the data that fit the title of his article better: people who identified as or leaned Republican were a bit more likely to say genetically modified food is safe than those who identify as or lean Democrat, 43% to 38%, although the analysis says the differences weren’t statistically significant by party or ideology. (Still, I’m a bit surprised by the swing. I would have assumed comparing “Democrats” rather than “liberals” would have filtered out more of the granola-y GMO skeptics.) But that brings me to my main point – so many of these conversations are strange without any data.
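Just to make the “not statistically significant” point concrete: whether a 43-to-38 split clears significance depends entirely on the sample sizes, which is why eyeballing topline percentages is risky. A back-of-envelope two-proportion z-test (my sketch; the sample sizes below are invented placeholders, not the survey’s actual ones):

```python
# Back-of-envelope two-proportion z-test. The 43% vs 38% figures are from the
# survey discussed above; the sample sizes are invented placeholders meant
# only to show how significance depends on n.
from math import sqrt, erfc

def two_prop_ztest(p1, n1, p2, n2):
    """Two-sided z-test for the difference of two proportions."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))        # (z, two-sided p-value)

print(two_prop_ztest(0.43, 300, 0.38, 300))    # p ~ 0.21: not significant
print(two_prop_ztest(0.43, 3000, 0.38, 3000))  # p < 0.001: significant
```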

Again, we don’t get a data point proving that homeopathy is much more common among Democrats/liberals than among Republicans/conservatives. Technically, Armstrong does link to a YouGov poll claiming that liberals are the “worst” on the issue, but the summary doesn’t include a breakdown by ideology, and the link to the full survey results misdirects to one on tax policy. Libertarians and conservatives have proven sympathetic to homeopaths through the health freedom movement. Laws “protecting” practitioners of homeopathy and other alternative medicine systems from charges of unlicensed practice of medicine have passed in blue, red, and purple states: Alaska, Colorado, Georgia, Massachusetts, New York, North Carolina, Ohio, Oklahoma, Oregon, Texas, and Washington. Heck, Walmart and CVS sell homeopathic “medicine” now. It’s worth pointing out that the data I’m asking for here is hard to find. There are some studies on the demographics of homeopathy users, but few cover American populations, and social and political norms don’t line up neatly across cultures. (Here’s a UK one and a Norwegian one.) Seriously, American grad students in sociology (of science/tech/medicine) or public health, there’s a great topic for you to dissect here.

I’m open to the idea that homeopathy is more prevalent among liberals, but most of the conversation is just anecdote and stereotype. And in that case, why does major “alternative medicine” guy Dr. Mercola have so much support in conservative circles? Similarly, although vaccination skeptics don’t have to be homeopaths, they’re often stereotyped as liberals. But in 2012, it was two Republican Congressmen who basically heckled an NIH institute director about mercury in vaccines causing autism in a Committee on Oversight and Government Reform hearing. And based on this study by the Cultural Cognition Project, you’re actually least likely to question vaccine safety if you’re very liberal and a bit more likely than average if you’re somewhat conservative, although I don’t think the difference is significant.


You can look at local data for trends, but that doesn’t map cleanly onto politics either. Silicon Valley daycares allegedly have low vaccination rates, except at the biotech companies. But according to this reporting on California vaccination exemptions, conservative Orange County also has one of the highest rates of exemptions. (Although maybe liberal anti-vaxxers are more likely to cluster together?)

In an article in Policy Review titled “Science, Faith, and Alternative Medicine“, philosopher Ronald Dworkin pointed out that alternative medicine tends to not be a partisan issue:

The confusion surrounding alternative medicine is reflected in the political arena, causing deep divisions within both the liberal and conservative camps. Within the conservative camp, libertarians see any governmental regulation of the alternative medicine movement as a violation of individual freedom. Cultural conservatives, on the other hand, are suspicious of the movement’s links to anti-Western multiculturalism. Within the liberal camp, progressives like Sen. Ted Kennedy and Rep. Henry Waxman have pushed for greater regulation of the alternative medicine industry in the spirit of consumer protection. Yet multiculturalists want alternative medicine to flourish unimpeded because they see it as a powerful weapon to use against traditional Western ideas.

Granted, he wrote that in 2001, and in the same article he feared that alternative medicine would end up becoming politically polarized. But the data above suggest that still hasn’t happened.

I think we should call out liberals who are “bad” on science issues just as much as conservatives, as in the famous Daily Show clip on the Outbreak of Liberal Idiocy. But the trope of these “bad science issues for liberals” misses a lot. Even the Daily Show clip framed it as a specific cultural strain of liberalism (see Bee’s question below, testing the qualifications of her vaccine denialist), not liberalism writ large – and the interviewed expert points out that vaccine denialism matches up with wealthy, white, college-educated people. The trope also often seems like a weak cop-out: either conservative writers think it somehow gets their own movements off the hook for their problems with science, or liberal and moderate authors use it as a poor attempt at crossing the aisle, without any clear understanding of whether the stereotype is true or where it comes from.

(You’ll notice I’m skipping the last section of the article here, and for good reason. Armstrong actually provides a good data point, not just a stereotype, on Democrats’ skepticism towards nuclear energy. And I agree with his basic argument: unreasonable skepticism towards nuclear just leads to the adoption of other power sources, and right now, those are overwhelmingly fossil fuels. It’s worth pointing out something interesting: while polarization on nuclear power increases with scientific literacy, liberals still accept it more as they become more informed; conservatives just accept it much faster. Also, can we guarantee this won’t become a NIMBY problem if our elites suddenly agreed across party lines?)

You might ask, why does this matter? Well, framing currently nonpartisan/nonideological issues as partisan can turn them into ideological markers. So a trope of just assuming Democrats hate GMOs and vaccines and love homeopathy can become a self-fulfilling prophecy, which would defeat the goal of most of these articles I’m criticizing (and of my own!). There’s probably a decent case to be made that this process happened with the politics of climate change, though I could only offer a vague outline at the moment without digging through more sources. The stereotype also doesn’t suggest a clear way to deal with the groups who reject the science on these issues. There’s my obvious complaint that it ignores that conservatives, and even moderates, (probably) represent a significant share of people with these beliefs.

But even just within liberal groups, there’s diversity in the way these beliefs manifest. You’re not going to convince a center-left Silicon Valley anti-vaxxer to vaccinate with the same approach you’d use on an honest-to-goodness hippie, because they’re in cultural environments with different values. The same applies to groups on the right. Good science communication accepts that those values matter just as much as the relevant knowledge if you want to engage meaningfully with someone. And if there’s anything we should be learning from last year, it’s that figuring out how to communicate with people who hold different values is going to be a major part of approaching politics in the future.

Weirdly Specific Questions I Want Answers to in Meta-science, part 1

Using “meta-science” as a somewhat expansive term for history, philosophy, and sociology of science. And using my blog as a place to write about something besides the physical chemistry of carbon nanomaterials in various liquids.

  • To what extent is sloppy/misleading terminology an attempt to cash in on buzzwords? Clearly, we know that motive exists – there aren’t two major papers trying to narrow down precise definitions of graphene-related terms for nothing. But as the papers also suggest, at what point is it a legitimate debate in the community about setting a definition? “Graphene” was a term that described a useful theoretical construct for decades before anyone ever thought* someone could make a real sheet of it, so maybe it isn’t unreasonable that people started using it to describe a variety of physical things related to the original idea.
    • This contains a sort of follow-up: What properties do people use in clarifying these definitions, and how much does that vary by background? Personally, I’d say the material I work with is way closer to the ideal of “graphene” than the more extensively chemically modified graphene derivatives lots of people work with, and I’m fine with using the term for almost anything that’s nearly all sp2 carbon with about 10 layers or fewer. But would a physicist who cares more about the electronic properties, which vary a lot with the number of layers even in that lower limit, consider that maddening? (See the toy sketch after this list.)
  • Nanoscience is very interdisciplinary/transdisciplinary, but individual researchers can be quite grounded in just one field. How much work is being done where researchers are missing basic knowledge of another field their work is now straddling?
    • For instance, when reading up on polymer nanocomposites, I noticed that people with extensive polymer science backgrounds often point out that many papers don’t engage with basic aspects of polymer physics. My hunch is that a lot of this comes from the fact that many people in this field started out working on the nanoparticles they wanted to incorporate into composites and only later moved to the composites themselves. They may have backgrounds in fields like solid-state physics, electrical engineering, or (inorganic/metallic/ceramic) materials science, where they would have been less likely to deal with polymer theory.
    • Similarly, one paper I read noted that a lot of talk about “solutions” of nanoparticles would be more precise if the discussion were framed in the terminology of colloids and dispersions.
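Since the follow-up bullet above is about how definitions vary by background, here’s a tongue-in-cheek way to make that concrete – encoding my working definition of “graphene” against a stricter one a band-structure-minded physicist might prefer. The thresholds are personal taste (and a guess at someone else’s), not community standards:

```python
# Tongue-in-cheek encodings of two working definitions of "graphene".
# The thresholds are personal taste (and a guess at a physicist's taste),
# not community standards.

def is_graphene_to_me(sp2_fraction: float, n_layers: int) -> bool:
    """Chemist-me: nearly all sp2 carbon, about 10 layers or fewer."""
    return sp2_fraction >= 0.95 and n_layers <= 10

def is_graphene_to_a_band_structure_purist(sp2_fraction: float,
                                           n_layers: int) -> bool:
    """A physicist tracking electronic properties might accept only a
    monolayer, since the band structure changes with each added layer."""
    return sp2_fraction >= 0.99 and n_layers == 1

# Few-layer, lightly modified material: graphene to me, not to the purist.
print(is_graphene_to_me(0.97, 6))                       # True
print(is_graphene_to_a_band_structure_purist(0.97, 6))  # False
```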

Oh my gosh, I made fun of the subtitle for like two years, but it’s true

  • Is the ontological status of defects in nanoscience distinct from their treatment in bulk studies of materials? This is a bit related to the first question in that some definitions would preclude the existence of some defects in the referent material/structure.
    • On the other hand, does this stricter treatment make more sense in the few-atom limit of many nanomaterials? Chemists can literally specify the type and location of every atom in the successful products of well-studied cluster reactions, although these are pushing the term “nano” from the other direction (in the sense that they may be too small).
    • Is this a reflection of the applications of defects at the different scales? (More philosophically worded: are defects treated differently because of their teleological nature?) At the bulk level, we engineer the nature of defects to develop the properties we want. At the nanoscale, some structures can be ruined for certain applications by the misplacement of a single atom. Is this also a reflection of the current practical need to scale up our ability to make nanomaterials? E.g., as more realistic approaches to large-scale nanotech fabrication are developed, will the practical treatment of defects in nanomaterials converge to how we treat defects in the bulk?

*Okay, more like anyone cared a lot about it, since there are papers going back to the 1960s where researchers describe what appear to be atomic monolayers of graphite.

Using Plants to Turn Pollution into Profits

Once again, I may prove why I’m a poor writer by burying a lede. But bear with me here, because this will come full circle and yield some fruit. You probably know that urban farming has become more popular over the last decade or so as eating local became trendy. As city dwellers started their own plots, people realized urban areas pose a unique challenge: avoiding lead poisoning. (Although a more recent study evidently suggests you’re getting less lead than people expected.) We used lead in lots of things throughout the 20th century, and it accumulated easily in soils exposed to high doses from certain sources – so cities and areas near busy highways have lead from old gasoline emissions, old lots have lead from old paint, and even old lead pipes and batteries can leach lead into soils. Other pollutants build up in other places: mercury and cadmium accumulate where significant amounts of coal are burned, and many mining practices leak a lot of the relevant metal into the surrounding environment.

Traditionally, the way to deal with polluted soil is to literally dig it all up. This has a major drawback: completely replacing a patch of soil also throws away the nice perks of the little micro-ecosystem that had developed there, like root systems that help prevent erosion or certain nutrient sources. Recently, a technique called phytoremediation has caught on; as the NYT article points out, it takes advantage of the fact that some plants are really good at absorbing these metals from the soil. We now know of so-called hyperaccumulators of many different metals and a few other pollutants. These are nice because they concentrate the metals for us in parts of the plants we can easily dispose of, and they help preserve the aspects of the soil we like. (And roots help prevent the soil from eroding into runoff, to boot.) Of course, one drawback here is time. If you’re concerned that a plot with lead might leach it into groundwater, you may not want to wait several harvests to get rid of it.

But a second drawback seems like it could present an opportunity. Something that bugged me when I first heard of hyperaccumulators was that disposing of them still seemed to pose problems. You can burn the plants, but then you need to extract the metals from the fumes, or it just becomes like coal and gas emissions all over again. (Granted, it is a bit easier when you have everything concentrated in one place.) Or you can just throw the plants away, but again, you need to make sure you’re doing it somewhere that will safely contain the metals as the plants break down. When I met someone last summer who studies how metals accumulate in plants and animals, I asked her if there was a way to do something productive with those plants and their now-concentrated valuable metals. Dr. Pickering told me this is called “phytomining”, and that while people have looked into it, economical methods still hadn’t been developed.

That looks like it may have changed last month, when a team from China reported making multiple nanomaterials from two common hyperaccumulators. The team studied Brassica juncea, which turns out to be mustard greens, and Sedum alfredii, a native herb, both of which are known to accumulate copper and zinc. The plants were taken from a copper-zinc mine in Liaoning Province, China. The plants were first dissolved in a mix of nitric and perchloric acid, and then simply heating the acid residue produced carbon nanotubes. Adding some ammonia to the acid residue formed zinc oxide nanoparticles from the Sedum, and zinc oxide with a little bit of copper from the mustard greens. What’s really interesting is that the structure and shape of the nanotubes seemed to correlate with the size of the vascular bundles (a plant’s equivalent of arteries and veins) in the different plants.


A nanotube grown from the mustard greens. Source.

But as Dr. Pickering said to me, people have been looking into this for a while (indeed, the Chinese team has similar papers from five years ago). For phytomining to take off, it needs to be economical. And that’s where the end of the paper comes in. First, the individual materials are valuable. The nanotubes are strong and conductive and could have lots of uses. The zinc oxide particles already see some use in solar cells and could be used in LEDs or as catalysts to help break down organic pollutants like fertilizers. The authors say they managed to make the nanotubes really cheaply compared to other methods: they claim they can make a kilogram for $120, while bulk prices from commercial suppliers of similar nanotubes are about $600/kg. (And I can’t even find prices that low: looking at one of my usual suppliers, I see multiwalled nanotubes selling on the order of $100 per gram.) What’s really interesting is they claim they can make a composite of the nanotubes and copper/zinc oxide particles that might be even more effective at breaking down pollutants.
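Just to spell out that comparison (my arithmetic, using the supplier price I quoted):

```latex
\$100/\mathrm{g} \times 1000\ \mathrm{g/kg} = \$100{,}000/\mathrm{kg},
\qquad
\frac{\$100{,}000/\mathrm{kg}}{\$120/\mathrm{kg}} \approx 830
```

So against the price I can actually find, the claimed cost is nearly three orders of magnitude cheaper; even against the $600/kg bulk figure the authors cite, it’s a factor of five.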

I imagine there will be some unforeseen issue in attempting to scale this up (because it seems like there always is). But this is an incredibly cool result. Common plants can help clean up one kind of pollution and be turned into valuable materials to help clean up a second kind of pollution. That’s a win-win.