Comparing Birth Control Trials Today to Those in the 60s Ignores a Sea Change in Research Ethics

Vox has a wonderful article on the recently published male birth control study that is a useful corrective to the narrative falsely equating it with the original studies of The Pill. (I’d say ignore Vox’s title, too, because it isn’t that helpful a narrative either.) But the content is useful in arguing against what seems like a terrible and callous framing of the study in most commentary. The key line: “And, yes, the rate of side effects in this study was higher than what women typically experience using hormonal birth control.” Also, can we point out that if something like 10 women a year at a school like UVA were committing suicide, and it might be linked to a medication they were taking, people would probably be concerned? There’s something disturbing about well-off American women mocking effects that seemed to disproportionately affect men of color (the most side effects were reported from the Indonesian center, followed by the Chilean center).

My bigger concern here, though, is that most people seem not to understand (or are basically ignoring) how modern research ethics works. For instance, the benefits weighed in deciding whether to continue a study aren’t merely the potential benefits of the treatment, but also the added benefit of acquiring more data. This was an efficacy study (so I think Phase II, or maybe a combined Phase I/II, although it might have been a really small Phase III trial). It seems the institutional review board felt enough data had been collected to reach conclusions on efficacy, so more data didn’t justify the potentially high rate of adverse effects. Which also DOES NOT mean that this treatment has been ruled out forever. The authors themselves recommend further development, based on the 75% of participants who said they would use this birth control method if it were available. I imagine they will tweak the formulation a bit before moving on to further trials. Also, it’s sort of amusing that complaints about this come from people who typically think moves toward regulatory approval are controlled by Big Pharma at the expense of patient health.

Yes, this is different from the initial birth control trials. Yes, the women of Puerto Rico were chosen as human guinea pigs. Though it’s worth pointing out that another major factor in choosing Puerto Rico was that it actually had a pretty well-organized family planning infrastructure in the 50s and 60s. Admittedly, racism almost certainly comes into play there too, because the politics of family planning were super complicated through the early and mid 20th century, and there were definite overlaps between eugenics and family planning. It’s also worth pointing out that the study was encouraged by Margaret Sanger (and earlier studies by Planned Parenthood). Also, the FDA didn’t even initially approve Enovid for contraception, because the atmosphere around reproductive health was so repressive back then; it was approved for menstrual disorders but prescribed off-label for contraception, which is how we know so many women desperately wanted the pill. Heck, even the Puerto Rico study was nominally about seeing if the pill helped with breast cancer. It took another year of discussion by the researchers and companies to get the FDA to finally approve contraception as an on-label use. The company making the pill was actually so concerned about the dosage causing side effects that it begged the FDA to approve a lower dose just for contraception (see pages 27-28 there) but was rebuffed for another year or two, and it refused to market the initial dose solely for contraception. (Also, to clarify, no one is taking these medications anymore. These versions of the pill were phased out in the 80s.)

Was there sexism at play? Absolutely, and I totally get that. But that doesn’t mean the narrative from 2016 neatly maps onto the narrative of the 1950s and 1960s. Which brings me to my last point. If your view of research ethics is primarily colored by the 1960s, that’s terrifying. You know what else happened at the same time as the initial contraception pill studies? The US government was still letting black men die of syphilis in the name of research. The tissue of Henrietta Lacks was still being cultured without the knowledge or consent of anyone in her family. (And the way they were informed was heartbreaking.) People were unknowingly treated or injected with radioactive material (one of many instances is described here in the segment of testimony by Cliff Honicker). One study involved secretly injecting healthy people with cancer cells, and in keeping with a theme, those cells were descendants of the ones originally cultured from Henrietta Lacks. Heck, there’s the Milgram experiment, and the Stanford Prison Experiment came in the 70s. The ethics of human experimentation were a mess for most of the 20th century, and really, most of the history of science. Similarly, medical ethics were very different at the time. Which isn’t to justify those things. But don’t ignore that we’ve been working to make science and research more open, collaborative, and just over the last few decades, and right now people seem more caught up in making humorous or spiteful points than in continuing that work.

(Other aside: it’s worth pointing out that the comparison here probably does have to be to condoms, which, you know, skip the side effects, though their typical-use effectiveness rate is worse. Most of these methods don’t obviously change ejaculate, so unless measuring sperm concentration and motility is a couple’s idea of foreplay, sexual partners who don’t know each other well will still probably want a condom [or unfortunately another method, because yes, the system is sexist and women are expected to do more] as assurance. It’s worth pointing out that the study design only worked with “stable” couples who were mutually monogamous and planned on staying together for at least a year over the course of the study, so there presumably was a high degree of trust in these relationships.)


Let’s Rethink Science Journalism

There’s been a lot of talk about science journalism after the revelation that a heavily publicized study about chocolate helping weight loss was actually a sham. A great deal of this is meta-commentary about whether the whole “sting” was ethical or whether it even added much to ongoing discussions of science communication. It’s worth pointing out that science journalism at major outlets could be said to have mostly worked, as they didn’t actually report on the study. The ScienceNews piece points out that a Washington Post reporter did want to write something up on the study and dropped it when he became suspicious. HuffPo would be the obvious exception, in that it evidently had TWO pieces on the study at one point, but its science and health sections have historically been pretty questionable. (The science section has gotten better lately. I don’t know about the health section.)

I’m going to focus mainly on science in general publications, because that’s what most people see, and because science journalism in general publications is organized in a weird way. The standard assumption seems to be that a science journalist should be able to write on any science topic, regardless of background. That increasingly strikes me as strange. The conceptual difference between, say, astronomy and neuroscience is huge. That’s not to say people can’t be good at covering multiple fields of science. Rachel Feltman at The Washington Post wonderfully covers developments from all over science. But we should recognize that this is an incredible talent that not everyone has. (Indeed, going over HuffPo’s recent pieces, it’s notable how many now seem to come from actual scientists, compared to what seemed like a never-ending stream of uncredited articles probably coming from anyone with an Internet connection a few years ago.)

[Image: a man looking slightly up, with a moon, a frog, butterflies, crystals, and other objects floating above his head, perhaps representing his thoughts or ideas.]

It’s hard to actually have all this in your head.

Pretending that all science writers can cover everything harms science journalism. Where I think this shows up particularly clearly is coverage of work done by children. For instance, consider last year’s story about the 12-year-old who supposedly made a major breakthrough about lionfish. Let’s be clear: Lauren did a lot of research for a 12-year-old and contributed a lot to a science lab, and we should celebrate that. But so many outlets either exaggerated her father’s claims or took his overly hyped claims too much at face value, because it seems like none of the original reporters had any idea where her project fit in with other research. Similarly, there was the 15-year-old who was said to have “invented a way to charge your phone,” but his project was similar to research that has been done for years (though again, Angelo ended up doing a lot of work for his age and seemed to develop a way to make the technique more effective).

I don’t see a reason why a publication couldn’t cover its entire science section with more specialized journalists who also happen to work outside of science. For example, maybe someone covering the physical sciences could also cover engineering and manufacturing firms for business reporting, and someone else could be on a combined life sciences/health beat. And someone who can specialize and keep up to date on a smaller area can probably offer names that better reflect the diversity of the research community, instead of just pulling up the same few powerful people who typically get referenced. In fact, probably one of the best trends in science coverage over the last decade has been the proliferation of pieces focusing on the social implications of science, and of pieces that focus on how science is shaped by society. Reporting like that would benefit from more journalists and communicators who cover things both inside and outside of science and can give voice to diverse groups. It would also be great if these pieces actually called on scholars in the sociology, history, and/or philosophy of science and technology to help inform them.

[Image: an announcement for a panel discussion.]

Discussions like this reflect important conversations in society that need to happen in science, too. And they’re at their best when people can understand both science and society.

Scientists and “Being Smart”, part 2: “Geekery” and What Happened to Technical Knowledge

For the purposes of this article, I’m treating “nerdiness” and “geekiness” as the same thing. If that bothers you, there are millions of other pages on the Internet that care about the difference. Also, I’m sort of abusing “technical” here, but bear with me.

I loved Chad Orzel’s quote from last time, and I wanted to dissect one part a bit more:

We’re set off even from other highly educated academics — my faculty colleagues in arts, literature, and social science don’t hear that same “You must be really smart” despite the fact that they’ve generally spent at least as much time acquiring academic credentials as I have. The sort of scholarship they do is seen as just an extension of normal activities, whereas science is seen as alien and incomprehensible.

In particular, I wanted to point this out in the context of a sort of backlash against the idea that nerdiness/geekiness should be embraced as part of science communication. Here’s the thing that bothers me about those pieces: as long as our society views specialized knowledge of STEM as less cultured than equally specialized knowledge in the humanities, it will probably always be seen as intrinsically nerdy just to have studied science and engineering. For argument’s sake, I actually do have something concrete in mind, based on comparing courses in different departments at Rice and UVA. As an example of specialized scientific knowledge, I’m thinking of a typical sophomore modern physics class that includes a mostly algebra-based introduction to relativity or single-variable quantum mechanics. For a roughly equivalent level of specialized humanities knowledge, courses at a similar level include a first course on metaphysics in philosophy and English courses focused on single authors. Quote Chaucer at a cocktail party? Congrats, you’re culturally literate! Mention that quantum mechanics is needed to describe any modern digital electronic device, or that GPS requires relativistic corrections? I hate to disagree with someone doing work as cool as Tricia Berry, but sorry, you will almost certainly be considered a nerd for knowing that.

Should we care about this? Yes. It’s the same impulse that lets Martin Eve write off science and engineering open access advocates as just some corporatist movement, or maybe useful idiots of some other cultural force, rather than a meaningful expression of how scientists and engineers themselves want to approach the broader culture. And I don’t think this is new. C.P. Snow wrote about the “two cultures” over 50 years ago, complaining about the growing division between literary culture and science and technology. I just think that instead of ignoring scientists, which was what worried Snow, we now laud them in a way removed from mainstream culture by putting them in some geek/nerd offshoot. We see this in media about science. Scientists in movies are almost never full people with rich emotional and social lives because, as this review of the Alan Turing biopic The Imitation Game points out, the convention is nearly always that they are more like machines trying to get along with humans. (I also feel somewhat justified in this idea because an English PhD at least partially agreed with me when I argued that Bill Gates or Steve Jobs might count as “Renaissance men” today, but culture seems uncomfortable applying that label to contemporary people whose original background was primarily technical.)

As I was writing this, I realized this may be part of a broader trend that separates technical knowledge from its related fields even outside of science and engineering. Consider the distinction between how people discuss politics and policy. I know they’re not equivalent, but it seems interesting to me that reading some theorist mainly approached in senior-level political science or philosophy makes you cultured, while trying to use anything beyond intro economics to talk about policy implementation seems unquestionably “wonky.” And I say that as someone with virtually no econ or policy training. Heck, Ezra Klein practically owns the idea of being a wonk, and he’s not an economist.

Over winter break, I got the chance to see a friend from high school who is currently working toward a master’s in public administration. We’re both at similar stages in our graduate programs, and we talked about what we study. She had her own deep technical knowledge in her field, but she commented that people often didn’t understand the idea of scientific management as a discipline and didn’t seem to appreciate that someone could systematically study team hierarchies and suggest better ways to organize. I think part of that is what I touched on in the first part of my rant, and Orzel’s idea that people just seem to think of humanistic studies as “extensions of normal.” But I also think part of it is some cultural lack of interest in, and understanding of, technical knowledge.

I don’t want to fall into some stereotypical scientist trap and write off ideas of fundamental truths, or downplay the importance of ethics, culture, and the other things generally considered liberal arts or humanistic. I just think that if Snow were writing today, he might say that “intellectual” has become an even narrower category, one that no longer recognizes the idea of doing something with that intellect. And that seems like a real problem.

Public Health Involves Science Communication, Too

This tweet has been making the rounds on social media lately.

I actually think the tweet is funny, but I’m really tired of the way the media seems to treat it as an actual policy argument. Cutting off flights would have seriously hampered the Ebola response. But there is in fact a different policy used for travel to and from regions with vaccine-preventable outbreaks: it is often recommended that you get the vaccine before traveling there, and if you are from a region with an outbreak, you may be asked to prove you have been immunized. It would be perfectly reasonable for Nigeria and other countries to demand that American travelers prove they are vaccinated against measles as part of obtaining a visa. That policy isn’t possible for diseases that lack both vaccines and effective, standard treatments.

And this has become an increasing concern of mine with so much of the coverage of the measles outbreak. There is actually a well-documented literature on effective science communication, but based on news articles, you wouldn’t know it exists. The idea that science communication is only about filling people’s heads with scientific knowledge (the “bucket model”) has been discredited for over 20 years. Treating your audience snarkily like they know nothing (or really, treating your actual, narrow audience like they’re geniuses and everyone in the outgroup like they’re insane) has never been shown to be effective in technical matters, despite being half the business model of Mic and Gawker.

Scientists and “Being Smart”, part 1: Relating to “Normal”

The always wonderful Chad Orzel has just written a new book and New York magazine published a fascinating excerpt that’s been resonating a lot with my friends, science-minded and non-science-minded alike. Orzel relates how people often tell him that he’s so smart when they learn that he’s a physicist. While it is incredibly flattering to have other people say you must be smart, Orzel points out it comes with an unacknowledged downside:

There’s a distracting effect to being called “really smart” in this sense — it sets scientists off as people who think in a way that’s qualitatively different from “normal” people. We’re set off even from other highly educated academics — my faculty colleagues in arts, literature, and social science don’t hear that same “You must be really smart” despite the fact that they’ve generally spent at least as much time acquiring academic credentials as I have. The sort of scholarship they do is seen as just an extension of normal activities, whereas science is seen as alien and incomprehensible.

A bigger problem with this awkward compliment, though, is that it’s just not true. Scientists are not that smart — we don’t think in a wholly different manner than ordinary people do. What makes a professional scientist is not a supercharged brain with more processing power, but a collection of subtle differences in skills and inclinations. We’re slightly better at doing the sort of things that professional scientists do on a daily basis — I’m better with math than the average person — but more importantly, we enjoy those activities and so spend time honing those skills, making the differences appear even greater.

A friend in law school argued that this can be a benefit: people seem to have fewer uninformed opinions that they’re compelled to share about fluid dynamics than about philosophy. I think that’s true to a point, though people also have lots of uninformed opinions on more controversial issues, like GMOs, ecology, and climate science. What I think is useful to consider is how all these fields relate to people’s lives. People use the end results (whether a tangible product or knowledge) of scientific and engineering research, but they don’t need an understanding of the underlying systems to use those products. People are in sociological, cultural, and political systems every day, though, and so they have at least a folk or commonsense understanding of how those things work, and so they push back when people in these fields tell them their knowledge is incomplete, if not outright wrong.

But you also see this a bit in misunderstandings of science: part of the reason people have strong opinions on things like food, ecosystems, and the climate is that they interact with those systems every day, and so they have a folk understanding of those too. The discrepancy between someone’s folk understanding and that of a scientific observer is why we have the “this winter is cold, so global warming is a myth” meme. There’s a reason this meme resonates with some people, though. It’s common in science communication to treat non-scientists as empty buckets waiting to be filled with scientific information that they’ll appreciate (or maybe even “f***ing love”), and to assume the major limit on public understanding of science and scientific issues is that people just don’t know enough. This is called the deficit model, and the key thing to know about it is that it is typically wrong. It’s true that a random non-scientist won’t know as much about a given scientific field as someone actually working in it. (There is a good chance they know something about it, though, and you should engage with that.) What’s really important is that people don’t engage with science in a vacuum. Everyone brings their own baggage, in the form of folk knowledge, cultural assumptions, moral values, and more. Scientists, and science communicators more broadly, need to engage with those issues beyond just pure scientific knowledge to truly engage with the public; otherwise people think you’re treating them like idiots.

It’s also generally more interesting to approach science communication this way. Sure, I like informing people about the latest trends and results from research (and studies show people are interested in science news) or other neat concepts that come up in my work, since as someone in the field, I’m more aware of this information. But I’m not going to have an equal back-and-forth about the topic of my old research with most people at UVA, except for the dozen or so people working on similar projects, because it took several years to get to that point. And that can be fine! I listen to law students, history majors, sociology majors, philosophy majors, policy students, biologists, and more have incredibly deep conversations on their areas of expertise all the time, and I learn a lot just from listening.

Even without finding someone who studies the sociology of science and technology, though, I can probably have an interesting conversation with almost anyone about the social or ethical implications of and questions related to my work. When I worked on the CO2 conversion project, lots of people right away grasped the implications for climate change work. And that’s the kind of conversation that’s probably most helpful in grounding science and engineering in normalcy.

This is also why I tend to hate saying I “dumb down” things when I’m talking to people outside my fields. There are 11 schools at UVA and over 100 academic programs, and I know people in all of them are “really smart” and also all smarter than me in several things. (And of course, as Orzel points out, this extends way beyond other people in academia or even college graduates; it’s just that my life is still mostly school.) The reason I change how I talk isn’t because people outside my lab and department are dumb; it’s to acknowledge that they all have expertise in different areas than I do, and I want to share some of my expertise with them (without forcing them to also have all my training) in a way they can appreciate. Meeting people where they are is generally just good practice, and science communication is no exception.

That Science Survey is More Complicated Than You Think… and it Has Some Good News

The Web has been abuzz with the results of the National Science Foundation’s latest Science and Engineering Indicators report. In particular, people are freaking out over the “Public Knowledge about S&T [Science and Technology]” section, which goes over the results of a survey of Americans’ knowledge of science and technology as well as their perceptions of scientific and technological issues. One of the most popular headlines points out that 26% of Americans think the Sun goes around the Earth. And that’s… bad. There’s not a really good defense of that. (Though consider that America had a school dropout rate of over 10% from the 90s through the early 2000s, so that probably explains a good hunk of it.)

It’s also pointed out that less than half of respondents knew that human beings evolved from earlier animals. But if the question is rephrased to say “according to the theory of evolution, human beings, as we know them today, developed from earlier species of animals” (emphasis added), 72 percent of respondents answer true. Rephrasing also greatly changes the responses to the Big Bang question. Only 39% answer true to “the universe began with a huge explosion,” but 60% say true to the statement “according to astronomers, the universe began with a huge explosion.” (It’s also worth pointing out that astronomers, if they’re being technical, wouldn’t really call the Big Bang an explosion.) So yeah, I don’t get why people don’t want to “believe” the science, but I’d give them credit for being familiar with the scientific theory.

I’m also surprised that there isn’t much criticism of the questions being asked. Science education reformers nearly always complain that current science education is too focused on memorizing facts rather than on applying the scientific process. But nearly all of these questions basically check whether a person knows the relevant fact. I think the questions are fine, though, as they do reflect science literacy, and I tend to think science literacy matters more to the average person than fluency with the scientific process.

The other thing most people don’t mention is the comparison between Americans’ performance and that of people in other countries (China, the EU, India, Japan, Malaysia, Russia, and South Korea) with similar surveys. The EU average was actually even worse than the US’s on the heliocentrism question (only 66% knew the Earth goes around the Sun). The US had the most correct responses to the question about whether all radioactivity is man-made (the correct answer is false). And we were the only country where a majority of respondents knew that electrons are smaller than atoms and that antibiotics cannot kill viruses. As a random but interesting aside, Japan was the worst of the rich countries on knowing that the father’s gene determines the sex of a baby, and it makes me wonder if the “eating lots of meat while pregnant means you’ll have a boy” myth referenced in some anime really is that common in Japan.

Finally, there is some good news. Even if Americans don’t ace the science literacy questions, they do care about science. Over 70% of Americans say the benefits of scientific research outweigh the harms, and about another 20% say the harms and benefits are about equal. Only Canada, Denmark, Finland, and Norway had more people than the US disagree with the statement that “modern science does more harm than good.” Over 80% of Americans think the federal government should fund basic scientific research, and a third of Americans think we need to increase science funding. The scientific community is nearly as trusted as the military, with just shy of 90% of the public expressing confidence in it. Nearly 90% of Americans think scientists and engineers work for the good of humanity, and most disagree with the idea that scientists and engineers are odd or have narrow interests. So even if Americans might not have the best understanding of science and technology right now, I’m hopeful about the future. But the narrow reporting on this survey may not help.

Why I Love Agents of SHIELD

So I finally caught up on TV with a post-finals DVR binge and watched the last three episodes of Agents of SHIELD for the fall. And I still love it. I’ve always loved it. Evidently this puts me in a minority on the Internet.

Penny Arcade, why?

Talking to a friend, I realized I love it so much because of one thing: FitzSimmons. Or more accurately, two things: Jemma Simmons and Leo Fitz. (Although others think they are basically one character.)

Why do I love two characters that other people view ambivalently? Because they’re scientists. Okay, technically Fitz is an engineer, but their roles in the show are very similar: both rely on the scientific method, and both use a lot of technology in their work. With Fitz and Simmons, Agents of SHIELD shows science and technology as forces for good, and that’s something we haven’t seen much on TV lately. I’m particularly excited that they’re actually full characters in the show, not recurring lab rats who just dump tech on the protagonists as needed, like Q in James Bond (or Marshall from ALIAS). Also, the things they talk about typically make some kind of sense (I’ve only heard the term “pure energy” once, though “gravitonium” makes no sense whatsoever).

What strikes me as particularly important is that they’re ethical scientists. I realize this sounds like an incredibly low bar, but seriously, this isn’t something we’ve seen on major TV shows lately. People (especially children) tend to be scared of the people they see working in science-related fields on TV. And honestly, I can’t blame them. The entire backstory of LOST and Heroes seemed to revolve around mysterious mad science. It is incredibly important to me that Fitz and Simmons comment on how unethical Project Centipede is, and that in the pilot, they angsted over the uncertainty of whether they could help Michael Peterson without hurting him. In the third episode, their favorite professor calls out the villain of the week for hypocrisy in his technological development.

Pretty big spoilers below the jump, if you haven’t been watching the episodes after the winter break.
