How To Evaluate The Quality Of EDS Research Papers: A Conversation With Markus Bohn, PhD

Over the last few years, the Ehlers-Danlos syndromes have become more than just an afterthought for many researchers across the globe. However, with the growing number of academic papers published, it has also become much harder for patients to evaluate the quality of those papers. Moreover, it’s incredibly challenging for people who aren’t scientists to tell the different types of publications apart – is it a review or a hypothesis? Does it present actual research data? For EDS Awareness Month, writer and advocate Jan Groh (Oh TWIST) speaks with Markus-Frederik Bohn, PhD, an associate professor at the Technical University of Denmark (DTU), about how patients can evaluate academic publications. Bohn shares some simple steps we all can take to understand what we are reading.

Jan Groh: Hi Markus! Thanks so much for joining me today. First, can you clarify: are you a PhD? Are you a medical doctor?

Markus-Frederik Bohn: I have a PhD in structural virology. It’s a lot of biochemistry but with a focus on infectious diseases.

Groh: Wow! So may I call you Dr. Bohn?

Bohn: Oh, please don’t! (Laughs.) Just call me Markus. In medical circles, people are very insistent on using their titles. But here in Denmark, for example, the only person you call with a title is the Queen. (Laughs.)

Groh: Okay, very well, Markus. For our audience, I just want to introduce how this wonderful conversation got hatched and planned. As some people may know, I am a blogger in the Ehlers-Danlos space. I’ve been running a blog at the domain ohtwist.com, which stands for “Oh, That’s Why I’m So Tired!” Eleven years ago, in January 2012, at 45 years of age, I had a massive onset and cascade of hypermobile EDS symptoms and went from walking to needing a wheelchair in just three weeks. I had been misdiagnosed with chronic fatigue and immune dysfunction syndrome (CFIDS) 30 years previously. When I discovered EDS and mast cell activation and all of the co-occurring things that I call the “Chronic Constellation,” suddenly, I understood why I’m so tired and all the factors that affect me – hence my domain name.

I’m not a trained journalist, and I’m not a scientist, although I’ve studied a little science, and I was raised by an engineer. I believe I understand and know the scientific method. But I will admit that my ADHD leads me to leap to erroneous, often premature conclusions. And then my autism makes me stick to those erroneous conclusions!

And this happened last January when a paper came out with the title, “COVID-19 and Ehlers-Danlos Syndrome, the dangers of the spike protein of SARS-CoV-2.” Just from observing people on Twitter, I had long suspected that EDSers may be more prone not only to getting sick more easily and more often but also to suffering more severe cases of COVID and then to developing long COVID. So I got all excited because this paper agreed with me. The authors put out a hypothesis after noticing that a very high proportion of the people with established EDS they saw who had been infected with SARS-CoV-2 and/or vaccinated had the most serious and disabling forms of long COVID.

I am quoting:

“It has thus become obvious to us that the preexistence of EDS in a person confers a high-risk factor for a very severe COVID-19 or long COVID.”

I shared this paper, but I made the rookie layman’s mistake of not noticing that not only was it not a study, but there was also no data in the paper to back the hypothesis. It was just an anecdote. I want to thank Dr. Cortney Gensemer from the Norris Lab at the Medical University of South Carolina for “slowing my roll,” bringing me back to earth, and giving me a little sobriety check, saying,

“Hey, y’all, this might sound very valid and true, but it’s lacking in strong science. And just because something has been published doesn’t mean it’s well-vetted or that it is a study.” 

I naively thought that anything that got published was, by definition, well-vetted. Like, why would it be published if it wasn’t well-vetted? Only to find out, with your help and that of others, that this is not always the case.

So this is why I invited you to give us all a bit of insight into the scientific process. Would you mind introducing yourself and telling us a bit more about your experience in the scientific world and the world of scientific publishing?

Bohn: I’m Markus. I’m an associate professor at the Technical University of Denmark at the moment. I’m leading a subgroup at the Center for Antibody Technologies. We basically investigate antibodies as a technology to treat diseases. Our focus is neglected tropical diseases.

The most important point I want to share is that there’s no clear standard for communicating science or medical information. That is a big misconception. There’s not “the” trustworthy outlet where everything that comes from it is 100% true. That doesn’t exist.

Different Journals Have Different Quality Standards

Groh: Okay, that makes me feel a little better as a layperson, but it sounds like even in the field of science, even doctors and scientists themselves can struggle to know who to trust and what is good data. To me, it sounds like it’s a never-ending process of discerning good information from bad? 

Bohn: And that’s by design. If there were just one authority, it would be normative in some way, and that would stifle innovation, right? A single authority would be a religion: then we would have a dogma, and you either agree or you are out. But in science, it’s, of course, not like that. And that’s by design. The audience for all scientific papers is, first and foremost, other scientists. So we write for each other. And that allows you to put out all kinds of ideas – for example, just a hypothesis. With your COVID paper, the main purpose might have been to raise awareness. They shared what they observed. And then other scientists who might have a cohort might say, “Oh, look at that. Let’s check the data we collected and see if we find the same pattern.” Maybe it’s about getting other researchers to think about the same problem and to share your viewpoint, so they can take it and then validate it.

Groh: Right, so it might engender a study about long COVID in EDSers. However, one critique of this paper was that the authors actually proceeded to give some clinical advice about vaccinations, which seemed a little premature in the absence of an actual study. But this paper aside, it was really eye-opening for me to get to meet you through email in our initial conversations and learn not only that not all papers are actual studies, but also that even among studies, not all are equal. It sounds like different journals, publications, or publishers may have different biases and/or goals. Some of them may be financial, others vanity. Can you say a little more about that?

Peer-Review Processes Differ Widely

Bohn: You mentioned that in this paper, they were overstating the conclusion of their findings, which is very common. Everyone has a high opinion of their work, especially scientists. They have to. You won’t get very far in science if you’re not obsessed with what you’re doing. You need to be 100% convinced that what you’re doing is the greatest thing you could do with your life. Then you find something in your experiments that you’re very proud of, and you want to put it on the biggest stage possible, the best journal with the highest impact. Normally, at this point, there should be the peer-review process, and then somebody should tell you, “Well, that is great. But maybe take out the part where you say, ‘This explains human evolution,’ because that’s probably not correct.” So then you take that out, and it’s all fine. But review processes are very different from publication to publication.

And these papers are not put out there for free. It’s not like you publish your findings on a blog. You are looking for an (impactful) outlet that takes it, and they put work in it, and they want money for it. And that can be quite a significant amount of money. Most lay people don’t know that making a paper accessible to everyone – if it is open access but in a high-profile journal – can easily cost $10,000. 

Groh: That was eye-opening! So it also becomes an issue of privilege. If you don’t have the money and the backing to pay these large fees to get that audience, you may never rise above the noise and get into those more prestigious journals. I had no idea that there was money involved in publishing!

High-Impact Journals Reject Most Papers

Bohn: That is a very hot discussion right now. Because usually, it’s paid by your grants. So if you’re in a rather well-off country, like Denmark or the US, then the person who funds your research usually also includes the money for publishing because they’re very proud if you get it in a good place. But in resource-poor areas, for example, in many African countries or some parts of Asia, the funding you get for all of your science might not be much more than that publishing fee. And then you wouldn’t be able to afford to publish it in an expensive journal. That’s an ongoing discussion and issue. But that aside, even if you pay that money, it does not mean your work gets accepted. If the journal is very prestigious, the chance that they will take that money and your great discovery is around 8%. Because they still reject most papers. Very good journals have single-digit acceptance rates because they can pick, so they’re not dependent on your paper because there are at least ten more papers for every paper they could take. 

Groh: Okay, so there’s a lot of competition for the prestigious journals. Would something like JAMA, the Journal of the American Medical Association, be one of those?

Bohn: In the medical field, (the) big names are everything with “Lancet” in front of it, or the Holy Grail is always “The New England Journal of Medicine.” And for scientists, it’s Nature, Science, Cell. These are the magic three letters, N-S-C. If you’re doing your PhD, for example, and you get your work in one of those, basically, you’re on a fast track to becoming a professor.

Some Journals Accept Almost All Research Without Hesitation

Groh: But then I imagine there are other, less prestigious journals that are probably happy to get whatever they can get?

Bohn: Yes, yes! Without naming names – but there are some nonprofit websites that actually track these journals a little bit on the side – some journals have an acceptance rate of 90%. So basically, almost everything that gets sent in is accepted. And sometimes, we see some more unsatisfactory developments, for example, when an editor and the reviewers recommend rejection, and then journal staff overrides that decision and takes the paper anyway. But even though this is fishy, it doesn’t mean that everything published in one of those journals is automatically bad. Quite the contrary can be true, because a lot of researchers don’t specifically check whether all the other literature in a journal looks correct or how it compares. So some journals might be a mix of good and not-so-good science.

Groh: Right, because that would be time-consuming.

Some Journals Are Predatory Journals

Bohn: You want your findings to be out, so you’re happy when your paper gets accepted. And before you know it, Jan, you’re in what’s called a “predatory” journal. Those are the ones that publish everything. I have personally been approached twice so far to, for example, “guest edit” a special issue. Because you’re responsible for a whole issue, you have to ask all your friends in the field to submit things so you can fill it up. So you’re soliciting papers, but in the end, they are just using you as free labor to do all the publishing. You have to be really careful before your name ends up where you don’t want it to be. Getting it back out can be really hard.

Groh: I now realize that just because something is published, it does not mean it is necessarily good science. It may be. We hope it is. But it’s essential to be discerning. What are some tips for the layperson to keep in mind as we read papers? Besides just knowing that not all publications are equal, what other details should we watch for with studies? How do we discern good studies from bad, or better from not so good?

Bohn: Let’s say we have found the paper we’re interested in, but we might not be familiar with the journal. We have this initial wall of skepticism up. Long journal titles with multiple words, especially, are immediately suspicious. The big ones are these single-word journals, right?

Groh: Yeah, kind of like “Cher.” (Laughing).

Look For Original Data & Sample Size

Bohn: Yes, it’s like with artists. If they’re known by a single name, then they’re very big and influential. It’s basically the same thing with journals, but this doesn’t mean other journals are bad. So we pick up our favorite paper; it’s a study, and they say they saw something. The first thing to check is whether there is original data in there. It’s sad to have to say it, but what if the researchers didn’t generate the data themselves? There might be data in there that comes from somebody else, which might be valid, but that adds another layer because it means the authors were not in control of collecting any of the data. Ideally, they collected the data themselves. So I’d rather give the benefit of the doubt to people who did the actual groundwork than to people who only have an opinion on it. That’s a starting point.

Then, of course, the size of the data matters. If you look at only four people, the chance that whatever you see is not meaningful is very high. For example, if you talk to your family and you realize you have something in common, it might be the same issue, but it’s impossible to know if you don’t compare it to a broader sample. So sample size is really important.
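
To make the sample-size point concrete, here is a minimal Python sketch (ours, not Bohn’s; the 50% symptom rate and the observed_rate helper are made-up assumptions) showing how wildly a rate measured in only four people can swing by pure chance:

```python
# A minimal sketch of why tiny samples mislead. The 50% symptom rate
# below is an invented assumption for illustration, not data from any study.
import random

random.seed(1)
TRUE_RATE = 0.5  # assumed background rate of some common symptom

def observed_rate(n):
    """Ask n random people; return the fraction who report the symptom."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# With n = 4, pure chance routinely produces "striking" rates like 0% or 100%.
print([observed_rate(4) for _ in range(5)])

# With n = 400, every run lands close to the true 50%.
print([round(observed_rate(400), 2) for _ in range(5)])
```

With four people, runs of 0%, 75%, or 100% turn up routinely; with 400, every run lands near the true rate.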

Hypotheses Should Be Backed Up By Data

Groh: So sample size is one good factor – understanding that you may be starting with just one family in the world, but then you broaden it out to help confirm your finding. What’s another potential factor to help the layperson discern a good study?

Bohn: Yeah. So, for example, a publication might say, “We have a family, and we found a gene that’s causing their condition.” They find a mutation in a human and then try to describe why this mutation could cause the family’s illness. That is completely valid. But at that point, it’s really hard to claim a causative connection because you don’t know if what you found actually causes that disease. Especially with a complicated disease with multiple factors involved that might be overlapping but are never quite the same between different people, it is very hard to figure out whether this mutation is actually doing something. I know, especially in the EDS field, there are a lot of stories like that, but if you don’t follow up on those case reports, they simply remain “just-so” stories.

You find something; then you form your hypothesis. For example, “we think that mutations in gene A are responsible for 80% of the people with hypermobile EDS.” Okay, good. So why does it do that? Then you set up an experiment, or you make some other sort of prediction. Generally, in the medical field, it means something with a mouse model, for example.

Different Types Of Papers: Hypothesis, Reviews, Opinions, Studies

Groh: So I’m realizing not everything is a study, first off. Some things are just hypotheses. Second off, some things are studies, but they aren’t equally good in terms of sample size. And then what other categories are there?

Bohn: There are also opinions, for example, in journals called “Current Opinion” or “Expert Opinion” – even though not everything in Current Opinion is an opinion! It makes it even more confusing! (Laughter.) A lot of publications are reviews. Reviews are where basic information gets aggregated. Some of those are fantastic because they prevent you from having to wade through a decade’s worth of original papers; instead, you can read one very good review. What makes a really good review is that it doesn’t make any claims that are not supported by the literature it reviews; it is not an opinion piece. So if it says something about Ehlers-Danlos in a review, then there should be a little note pointing to the reference, because everything you say needs to be backed up, ideally by multiple sources in agreement. But when there’s a controversy in the field, these are the most fun reviews to write because then you can pit the arguments against each other. I mean, as a scientist, you also like the drama. (Laughs.)

Groh: Okay, so everything starts off as an opinion. Somebody has an idea, such as “I think that EDSers may be prone to long COVID,” which, at this point, is an opinion. And so that’ll start to get studied, hopefully by multiple people in multiple groups, with different findings coming out. And over time, down the road in two to five years, this might play out, and there’ll be all this literature, and a review might come out showing that there are, say, five studies that all show an increased prevalence of long COVID in EDS patients. Am I kind of on the right track?

Watch Out For How Data Is Interpreted

Bohn: So ideally, what happens is that the opinion becomes a hypothesis so that it’s testable, because “we think people with EDS will be more prone to develop long COVID” is not a testable statement. You need to phrase it more concretely. A great fallacy in studies is something I had to fix in my work as a part-time statistician: the lack of “multiple comparison correction.” This means that if you have multiple ways to score a goal, you need to correct for that. And especially with complicated, multisystemic conditions like EDS, it’s really easy to have multiple ways to get a hit, but then you literally need to divide your significance threshold by the number of ways you can make a hit. One of my pet peeves is psychiatric disorders, for example, where you have a host of symptoms, which makes it really hard to pinpoint a common cause because you don’t know if everyone under a certain umbrella, such as schizophrenia, has the exact same condition with the same cause. That is why we have been very slow in finding a gene that causes, for instance, schizophrenia. And there might never be one gene that causes it because there are so many parameters together.
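
To illustrate the correction Bohn describes, here is a minimal Python sketch (ours, not his; all five p-values are invented) of the standard Bonferroni approach, which divides the usual 0.05 significance threshold by the number of comparisons:

```python
# A minimal sketch of Bonferroni multiple-comparison correction.
# All p-values below are invented for illustration.

p_values = [0.04, 0.20, 0.30, 0.65, 0.90]  # one p-value per symptom tested
alpha = 0.05                                # usual threshold for a single test
n_tests = len(p_values)

# Naive reading: 0.04 < 0.05, so one symptom looks "significant."
naive_hits = [p for p in p_values if p < alpha]

# Bonferroni: with five chances to score a hit, divide the threshold by five.
corrected_alpha = alpha / n_tests           # 0.01
corrected_hits = [p for p in p_values if p < corrected_alpha]

print(f"Uncorrected (alpha = {alpha}): {len(naive_hits)} hit(s)")
print(f"Bonferroni (alpha = {corrected_alpha}): {len(corrected_hits)} hit(s)")
```

Read naively, one of the five tests looks significant; after dividing the threshold by five, none do – exactly the kind of over-claiming the correction is meant to catch.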

Groh: Well, that brings up a point that I call “SNP hunting” – SNP stands for single nucleotide polymorphism, in other words, a single-point genetic mutation. I call those the “low-hanging diagnostic fruit” of medicine because, of course, it’s absolutely wonderful when we find a single-point mutation – vascular EDS, kyphoscoliotic EDS, and the other rare forms of EDS each trace back to just one or two mutations. And so suddenly, it becomes easy to test for: I can get your DNA in a spit sample or a blood sample, and we can go hunting for just that thing. But I strongly suspect that, in the case of hEDS, there might be multiple mutations, possibly across multiple genes, just as they have found and are finding for autism and ADHD. We’ve been able to wrap our heads around that for those neurodevelopmental conditions, but we haven’t done the same for hypermobile EDS and the body. I’m not saying don’t keep SNP hunting, but I just have this funny feeling that we’re missing a bigger picture. This is just me giving an opinion – my unscientific lay opinion, you know. How do you go about that?

Bohn: Another of my pet peeves is the overestimation of questionnaires. For instance, people say, “We made an online questionnaire and had 2,000 people responding. Look at that! We have a big sample.” But actually, you have no idea what your sample size really is, because how many of those 2,000 responses are just noise? Questionnaires can be a very good starting point, though, and of course, there are positive examples of how these things can turn out. For instance, at the Broad Institute, there was an awareness effort around angiosarcoma, a rare kind of cancer. Very few people have it, so part of the problem was just finding the people who have it. Originally, the only way to do that was to reach out on social media and find people with angiosarcoma. And then, from that, they started to build a sample repository, so people can, for example, submit their medical records or a tissue sample.

Groh: Right. So you start really broad, and it might not be quite as scientific at the outset, but from that, you can then start winnowing down the data.
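
To put a rough number on the “noise” problem Bohn mentions, here is a minimal Python sketch (ours, with invented rates; the genuine_fraction and symptom rates are assumptions, not survey data) showing how unverified respondents blend into and distort a questionnaire result:

```python
# A minimal sketch of how noise respondents distort an online questionnaire.
# All numbers are invented assumptions for illustration.

n_respondents = 2000
genuine_fraction = 0.6   # assume only 60% of respondents truly have the condition
true_symptom_rate = 0.8  # assumed symptom rate among genuine patients
noise_symptom_rate = 0.3 # assumed rate among everyone else (the "noise")

genuine = int(n_respondents * genuine_fraction)
noise = n_respondents - genuine

# Expected count of "yes" answers from each group combined.
yes_answers = genuine * true_symptom_rate + noise * noise_symptom_rate

# The survey reports one blended rate and cannot tell the groups apart.
print(f"Measured rate: {yes_answers / n_respondents:.0%} "
      f"(true rate among genuine patients: {true_symptom_rate:.0%})")
```

Under these assumptions, the survey reports a single blended 60% rate even though the true rate among genuine patients is 80% – and nothing in the 2,000-response count reveals the distortion.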

Everybody Has Biases

Bohn: That’s what it is in the end. And everybody has a bias, right? The way we try to get around this is by just stating it. As a patient, you probably have a bias too: you are looking for studies that represent you and your personal symptoms. However, a good study is not necessarily one where you find yourself. Ideally, its composition looks completely alien to you – ideally, you can’t say, “Oh, this sounds like me.” Because it’s not supposed to be about you, right? It’s supposed to be a representative sample of literally anybody who could be in there. But that does not happen too often.

Another fun thing: if a scientist goes out and tries to recruit people for a study, where do they find most of their volunteers? On campus, literally! Most participants in studies are college students because they just happen to be out there and know you’re looking for someone for the study. So samples skew younger and fitter, with school degrees – there’s a lot of privilege in having access to campus.

Groh: Wow, Markus, this was so insightful. Thank you so very, very much for this. And thanks for your dedicated work in the scientific space and for helping to produce well-vetted science and research yourself!

Bohn: Thank you for the opportunity. I mean, you’re going to have to beg me to talk about science … (Laughs.)

Groh: Yeah, I might have to beg you to stop! (Laughs.)

Bohn: Oh, exactly!

Groh: That’s really great. Well, it’s been a pleasure speaking with you, and I’m glad we could have this opportunity to discuss the matter, as it’s an ongoing issue, and I want my colleagues and my cohort not to feel bad. Don’t beat yourselves up if you “fall for” bad data or a bad paper.

It’s understandable; it’s human nature. We all have confirmation bias. We all latch onto things that reinforce what we want to see and believe, but we’re doing our best to move science forward in a very – oh, there’s the word – systematic way, so that when we state something, we can be sure there are facts to back it.

You can find Jan’s website here:

https://ohtwist.com/

Jan Groh

May 2023
