3 Ways the Pandemic Has Made the World Better

COVID-19 has inflicted devastating losses. It has also delivered certain blessings.


This has been a year of terrible loss. People have lost loved ones to the pandemic. Many have gotten sick, and some are still suffering. Children have lost a year of school. Millions have lost a steady paycheck. Some have lost small businesses that they’d spent decades building. Almost all of us have lost hugs and visits and travel and the joy of gathering together at a favorite restaurant and more.

And yet, this year has also taught us much. Strange as it may sound, the coronavirus pandemic has delivered blessings, and it does not diminish our ongoing suffering to acknowledge them. In fact, recognizing them increases the chance that our society may emerge from this ordeal more capable, more agile, and more prepared for the future.

Here are three ways the world has changed for the better during this awful year.

1. We Now Know How to Code for Our Vaccines

Perhaps the development that will have the most profound implications for future generations is the set of incredible advances in synthetic messenger RNA (mRNA) biotechnologies.

We got our vaccines very fast—the previous record for vaccine development was four years, and that was set in the 1960s. This time, we developed multiple good COVID-19 vaccines in less than a year. Luck bought us some of that speed. For example, the HIV retrovirus is notoriously difficult to vaccinate against, and we still don’t have a vaccine for it. The virus behind COVID-19 proved much more susceptible, and billions of dollars in public money and a global sense of urgency pushed things along. Tragedy also sped things up: Because the pandemic was raging—more cases to test against—it was easier to get results from vaccine trials.

But amid all this came historic developments. The new mRNA technology, on which several vaccines—notably Pfizer-BioNTech’s and Moderna’s—are based, is an epochal scientific and technical breakthrough. We are now coding for vaccines, and thanks to advances in science and industrial production, we can mass-produce them and figure out how to deliver them into our cells in a matter of months.

This is all new. Neither Moderna nor BioNTech had a single approved product on the market before 2020. Each company essentially designed its vaccine on a computer over a weekend in January 2020—BioNTech’s took just a few hours, really. Both companies had vaccine candidates designed at least four weeks before the first confirmed U.S. COVID-19 fatality was announced, and Moderna was producing vaccine batches to be used for its trials more than a month before the World Health Organization declared a pandemic. In 2021, the companies together aim to produce billions of stunningly efficacious vaccine doses.

We know the principle behind vaccination: Once our immune system encounters a virus, it can learn how to fight it and remember how to do it the next time. Vaccines give our immune system the practice it needs, in a fight that is deliberately structured to be unfair in our favor. Most of our vaccines until now have been either weakened or completely inactivated viruses or, more recently, protein subunits: just a few fragments from the virus, called antigens. We’ve achieved extraordinary levels of effectiveness and safety with these techniques, but they still have downsides. In 1955, a botched batch of polio vaccine that had not been fully inactivated killed 10 children and caused paralysis in hundreds. We’ve since made sure to never repeat that tragic failure, but producing vaccines from the actual pathogen still means handling the virus in the manufacturing process. The newer subunit vaccines hold a lot of promise, but come with their own challenges. Identifying the right subunit (or antigen) can be difficult, and these vaccines tend to produce weaker immune responses. Plus, it’s not like antigens are hanging out on a supermarket shelf. We grow them in cell systems such as yeast or E. coli—essentially hijacking those cells’ genetics to produce the antigens we want, and then harvesting the yield. It works, but it’s slower than the mRNA process.

The mRNA vaccines work differently. For these, scientists look at the genetic sequence of a virus, identify a crucial part—such as the spike protein, which the virus uses as a key to bind onto cells’ receptors in order to unlock and enter them—produce instructions to make just that part, and then send those instructions into our cells. After all, that’s what a virus does: It takes over our cells’ machinery to make more of itself. Except in this case, we instruct our cells to make only the spike portion to give our immune system practice with something that cannot infect us—the rest of the virus isn’t there!
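
The mechanism described above can be thought of in programming terms. Below is a minimal, purely illustrative Python sketch of the idea of “producing instructions to make just that part”: it takes a made-up stand-in for a viral genome, slices out a hypothetical spike-like coding region, and rewrites it as mRNA-style instructions (RNA uses the letter U where DNA has T). The sequence and coordinates are invented for illustration; they are not the real SARS-CoV-2 genome.

```python
# Toy sketch only: the genome string and coordinates below are made up.
toy_genome = (
    "AACCGG"           # upstream region we do not need
    "ATGTTTGTTCTTCTT"  # hypothetical spike-like coding region
    "TAA"              # stop codon that ends the coding region
    "GGCCAA"           # downstream region we do not need
)

# Hypothetical coordinates of the coding region within the toy genome.
spike_start, spike_end = 6, 24

# "Produce instructions to make just that part": copy only the coding
# region, then rewrite DNA letters as RNA letters (T becomes U).
spike_dna = toy_genome[spike_start:spike_end]
spike_mrna = spike_dna.replace("T", "U")

print("mRNA instructions for just the spike-like part:", spike_mrna)
# -> AUGUUUGUUCUUCUUUAA
```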

Until this year, that was the dream behind the synthetic mRNA technologies: a dream with few, scattered adherents, uphill battles, and nothing to show for it but promise. This year, it became a reality.

Our cells have a remarkable kind of software—wetware—that uses the instructions in the DNA in our cells’ nuclei to produce proteins. If you imagine the assembled proteins as a Lego structure, the DNA is like the instruction booklet. But someone has to look at those instructions and put the blocks together in the right way. In the cell, a key part of this process is the messenger RNA: a short-lived, single-strand molecule that carries the instructions from the DNA in the nucleus to the protein-making factory outside it.
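
To make the instruction-booklet analogy concrete, here is a small, hypothetical Python sketch of what the cell’s protein-making machinery effectively does: it reads an mRNA three letters (one codon) at a time and adds the matching amino-acid “block” until it reaches a stop codon. The codon table here is a tiny subset of the real 64-entry genetic code, just enough to translate the toy mRNA from the earlier sketch.

```python
# Tiny, illustrative subset of the genetic code (the real table has 64 codons).
CODON_TABLE = {
    "AUG": "M",   # methionine, the usual start
    "UUU": "F",   # phenylalanine
    "GUU": "V",   # valine
    "CUU": "L",   # leucine
    "UAA": None,  # stop codon: release the finished protein
}

def translate(mrna: str) -> str:
    """Read codons left to right and assemble the protein until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon reached
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("AUGUUUGUUCUUCUUUAA"))  # -> MFVLL
```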

In 2020, we figured out how to make messenger RNA with precision: programming the exact code we wanted, producing it at scale (a printing press for messenger RNA!), and finding a way to inject it into people so the fragile mRNA makes it into our cells. The first step was pure programming: Uğur Şahin, the CEO of BioNTech, sat at his computer and entered the genetic code of the spike protein of the mysterious virus that had emerged in Wuhan. Moderna employees had done the same thing the weekend after the genomic sequence was released on January 10. The Moderna vaccine candidate was called mRNA-1273 because it encoded all of the 1,273 amino acids in the SARS-CoV-2 spike protein—the code was so small that it could all be represented with a little less than half the number of characters that fit on a single-spaced page.
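
As a quick back-of-the-envelope check on the numbers in that paragraph, here is a short Python sketch. The 1,273 amino-acid figure comes from the article itself; the roughly 3,000 characters per single-spaced page is my own assumption for the comparison.

```python
# Illustrative arithmetic only; the page-size figure is an assumption.
SPIKE_AMINO_ACIDS = 1273     # amino acids in the SARS-CoV-2 spike protein
NUCLEOTIDES_PER_CODON = 3    # each amino acid is specified by a 3-letter codon
CHARS_PER_PAGE = 3000        # assumed characters on a single-spaced page

coding_nucleotides = SPIKE_AMINO_ACIDS * NUCLEOTIDES_PER_CODON + 3  # +3 for the stop codon
one_letter_protein_chars = SPIKE_AMINO_ACIDS  # one character per amino acid

print(f"Spike coding sequence: about {coding_nucleotides} RNA letters")
print(f"One-letter protein code: {one_letter_protein_chars} characters, "
      f"roughly {one_letter_protein_chars / CHARS_PER_PAGE:.0%} of a page")
# -> about 3822 RNA letters; 1273 characters, roughly 42% of a page
```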

The rest of the process relied on key scientific and industrial innovations that are quite recent. Messenger RNAs are fragile—they disintegrate easily, as they are supposed to. The lipid nanoparticles we envelop them in to use as delivery systems were approved only in 2018. Plus, the viral spike protein is a notorious shape-shifter. It takes one form before it fuses with our cells and another one afterward. The latter, postfusion form did not work well at all for developing vaccines, and scientists only recently figured out how to stabilize a virus’s spike in its prefusion form.

Now that this process is in place, a host of possibilities have opened up. We may soon have vaccines for many other diseases that have eluded our grasp. Efforts are already under way, for example, for an mRNA vaccine for malaria—a parasitic disease that each year kills hundreds of thousands of people, mostly children, and that is notoriously hard to vaccinate against.

We may also finally get a new set of tools to better fight cancer. (Both Moderna and BioNTech were working on cancer treatments before pivoting to coronavirus vaccines.) The challenge with cancer is that it is our own cells gone awry. It is really difficult to find a way to kill just a patient’s cancer cells thoroughly without also killing healthy cells—and thus the patient. But synthetic mRNA can encode just the specific mutations in one patient’s cancer cells—and if the cancer cells mutate further, those changes can be targeted as well.

This may allow us, finally, to transition from a broadcast-only model of medicine, wherein drugs are meant to be identical for everyone in a particular group, to targeted, individualized therapies. Plus, these technologies are suitable for small-scale but cheap-enough production: a development that can help us treat rare diseases that afflict only a few thousand people each year, and are thus usually ignored by mass-market-oriented medical technologies.

It’s also no coincidence that these two mRNA vaccines were the fastest to market. They can be manufactured rapidly and, crucially, updated blazingly fast. Şahin, the BioNTech CEO, estimates that six weeks is enough time for the company to start producing new boosters for whenever a new COVID-19 variant emerges. Pfizer and Moderna are both already working on boosters that better target the new variants we’ve seen so far, and the FDA has said it can approve these tweaks quickly.

2. We Actually Learned How to Use Our Digital Infrastructure

The internet, widespread digital connectivity, our many apps—it's easy to forget how new most of this is. Zoom, the ubiquitous video service that became synonymous with pandemic work, and that so many of us are understandably a little sick of, is less than 10 years old. Same with the kind of broadband access that allowed billions to stream entertainment at home and keep in touch with family members and colleagues. Internet connectivity is far from perfect or equally distributed, but it has gotten faster and more expansive over the past decade; without it, the pandemic would have been much more miserable and costly.

Technology also showed how we could make our society function better in normal times.

Consider, for example, the advent of telehealth during the pandemic. Last summer, while a few hours away from home, I developed the same debilitating neck pain that I had experienced once before, about five years ago, on a different trip. It was instantly recognizable: sharp, relentless pain that radiated from where my neck joined my left shoulder; even a slight movement felt as if an army of tiny, poisonous spears were hitting that area.

The previous time, I was told nothing could be done before I could see my doctor in person, many days later. Not so now: My doctor and I connected immediately through a new patient portal, which had a videochat option that had become available because of the pandemic. I described the problem and demonstrated my limited range of motion. He signed off by saying he’d send a prescription for oral corticosteroids to a nearby pharmacy. Just an hour later, and less than a full day after the onset of my symptoms, I was sitting in my car in the pharmacy’s parking lot, staring at the box of medicine in wonder. Previously, I had suffered through severe pain for multiple days, to the degree that I had started hallucinating from lack of sleep. This time, relief was right there in my hand.

According to the CDC, telehealth visits increased by 50 percent in the first quarter of 2020, compared with the same period in 2019. Such visits are clearly not appropriate for every condition, but when warranted, they can make it much easier for people to access medical help without worrying about transportation, child care, or excessive time away from work. Remote access to medical help has long been a request from people with disabilities and people in rural areas, for whom traveling to clinics can be an extra burden.

Work, too, has been transformed. Suddenly, hundreds of millions of people around the world had to figure out how to get things done without going into the office. It turns out that for many white-collar jobs, this is not just possible; it comes with a variety of upsides.

Commutes, to take one example, are unhealthy—they waste time and add to our sedentary hours, which are associated with many adverse health outcomes. Perhaps worst of all, driving is among the most dangerous activities we undertake each day. The competition to avoid long commutes distorts property values and can worsen inequality, as those with money pay extra to live near centers of work, while other residents can no longer afford to live there.

Unsurprisingly, many of my luckier friends—those able to work from home and who did not suffer directly from COVID-19—have been whispering about how much better their lives have gotten without commutes and with more flexibility.

Many events have become a lot more inclusive too. Throughout the past year, I’ve been able to attend conferences and talks I’d otherwise have had no chance to participate in without extensive time and travel costs. I’ve also given talks during which I’ve interacted with folks from around the world, who might never have been in that “room” otherwise. And I’ve noticed that a broader range of experts can appear on TV, now that we’ve normalized calling in from one’s home office, living room, or even bedroom. In a world divided by visas, income inequalities, time constraints, and opportunity, why didn’t we just incorporate videoconferencing into more of our events before? Why didn’t we take questions from audience members who weren’t in the room? We should keep doing all of that after the pandemic as well.

I certainly miss some of the serendipitous conversations that conferences and other in-person events provided: not just during the talks, but in the corridors, or at breakfast before a panel. And it’s true, such events are a form of livelihood for many, and I’m not advocating for eliminating that income. Nor am I saying that we should never go back to the office, or that we should ignore the problems that can come with working outside it—especially the threat to work-life balance. Being in the same office also allows for conversations that go beyond strict work discussions, and for the connections those conversations foster. We might never be able to fully replicate those positives digitally, but we should still provide some remote access to those who would otherwise be completely left out.

3. We’ve Unleashed the True Spirit of Peer Review and Open Science

On January 10, 2020, an Australian virologist, Edward Holmes, published a modest tweet: “All, an initial genome sequence of the coronavirus associated with the Wuhan outbreak is now available at Virological.org here.” A microbiologist responded with “And so it begins!” and added a GIF of planes taking off. And so it did indeed begin: a remarkable year of open, rapid, collaborative, dynamic—and, yes, messy—scientific activity, which included ways of collaborating that would have been unthinkable even a few decades ago.

Holmes was announcing that a scientist in China, Zhang Yongzhen, had rushed to sequence the genome of the mystery virus from Wuhan—his team had worked practically nonstop, completing the sequencing a mere 40 hours after a virus sample had arrived in a box of dry ice at his Shanghai office. Without waiting for approval or official permission, Zhang also promptly shared the result with a consortium of researchers in Australia, giving them the go-ahead to post it online in an open repository.

Peer review—review by one’s fellow scientists—remains the cornerstone of the scientific process, and rightly so: Good science happens when members of a community dedicated to advancing our knowledge can examine findings, replicate results, test theories, and challenge one another.

However, peer review as a formal process—as it happens right now—is different from the idea and spirit of peer review. We have “peer-reviewed” scientific journals in which scientists can publish their findings. But in a hard-to-believe-but-true twist, many of those journals—especially the highly prestigious ones that can help a scientist’s career—are privately owned by for-profit companies, even though the peer reviews are done for free, on a volunteer basis, on articles that are submitted by scientists who also don’t get paid by the journals.

Worse, after the papers go through the formal process at these for-profit journals, they are put behind paywalls, and the companies charge outrageous sums to university libraries—whose own scientists freely contributed both the papers and the peer review. The companies block the general public from accessing the papers too, unless they also pay for them. They will even charge scientists for the privilege of making their papers “open access”—again, papers written by the very scientists who receive no financial benefit from charging the public!

It’s little wonder that these companies remain highly profitable while many academics are up in arms over this terrible process that impedes the dissemination of science! Unfortunately, scientists—especially those who are early in their career—feel compelled to keep participating in this system, because getting published is the coin of the realm for hiring, promotions, and prestige.

Well, no more. When the pandemic hit, it simply wasn’t tenable to keep playing the old, slow, closed game, and the scientific community let loose. Peer review—the real thing, not just the formal version locked up by for-profit companies—broke out of its constraints. A good deal of the research community started publishing its findings as “preprints”—basically, papers released before they have been vetted and accepted by formal publications—placing them in nonprofit scientific repositories that had no paywalls. The preprints were then fiercely and openly debated—often on social media, which is not necessarily the ideal place for it, but that’s what we had. Sometimes, the release of data was even faster: Some of the most important initial data about the immune response to the worrisome U.K. variant came from a Twitter thread by a tired but generous researcher in Texas. It showed true scientific spirit: The researcher’s lab was eschewing the prestige of being first to publish results in a manuscript by allowing others to get to work as fast as possible. The papers often went through formal peer review as well, eventually getting published in a journal, but the pandemic has forced many of these companies to drop their paywalls—besides, the preprints on which the final papers are based remain available to everyone.

Working together, too, has expanded in ways that were hard to imagine without the new digital tools that allow for rapid sharing and collaboration, and also the sense of urgency that broke through disciplinary silos.

For example, in early 2020, after I started writing about the necessity of wearing a mask, it became clear that we also needed detailed scientific articles examining the efficacy of masks for dampening community transmission. The questions the topic touched on involved many disciplines, including infectious diseases, aerosol science, and sociology. So I teamed up with a group of scientists, doctors, researchers, and data analysts across the globe to co-write an academic paper, and from start to finish, it was like nothing I had done before. A lot of scientific work involves international teams, but this time we had assembled practically on the fly: the co-authors lived in places as varied as Cape Town, South Africa; Beijing, China; Chapel Hill, North Carolina (me!); Stanford, California; and Oxford, England. We would eventually publish in one of the most highly cited scientific outlets in the world, the Proceedings of the National Academy of Sciences of the United States of America, which is more than 100 years old. Most of the tools we used, however—shared editing of scientific papers, videochats, and other forms of remote meetings—weren’t widely available or as easy to use even just a few years ago.

Like many others, we didn’t wait for formal peer review to end before sharing our findings. We quickly put our paper onto a preprint server so that it could receive both open peer review from the scientific community and questions and comments from other relevant stakeholders, including policy makers and even ordinary people trying to puzzle through a confusing time. And feedback came in quickly: We received thoughtful and lengthy emails and Twitter corrections and comments, which were extremely useful—as well as much less useful contributions, which sometimes involved random people getting mad at us. I started categorizing the feedback on the sections I’d worked on, as did many of my collaborators. Even before the first round of formal peer reviews was in, we used that feedback to generate a new, stronger version, which we added to the preprint server. We then got our initial round of formal peer review—which we also found quite useful. We updated the paper again, resubmitted the new version to PNAS, and waited for a second round of peer review (which took many months, but was also very useful). Finally, about a year later: acceptance and formal publication.

I have to admit, the final published paper looks great on my CV, but our preprint had already been downloaded more than any other paper on that server. It has been cited hundreds of times, including in the highest-ranked medical and scientific journals in the world; contributed to the global scientific discourse; and played a crucial role in the adoption of mask mandates. We even had a celebratory happy hour—chatting about our lives, our challenges, and our new, shared friendship.

This process of open peer review is fast, dynamic, and, admittedly, messy; it’s not without its downsides. Too many sensationalist headlines have resulted from journalists rushing to write about preprints that had not yet been sufficiently evaluated, without waiting for the process of open review and feedback to do its work. This can be confusing to the broader public. However, the explosion of preprints is sometimes portrayed as the downfall of formal peer review. It’s the opposite. No process that allows more insight into how the sausage gets made can avoid a glimpse of its less tasteful elements, but what we need to change is how we relate to science; we should not try to go back to the stilted, slow pre-pandemic world. We should embrace the extraordinary and robust process of open science and expanded peer review, as well as its dynamism, even as we establish new guardrails to contain its energy.


The pandemic happened at a moment of convergence for medical and digital technology and social dynamics, which revealed enormous positive potential for people. Nothing will erase the losses we experienced. But this awful year has nudged us toward dramatic improvements in human life, thanks to new biotechnologies, greater experience with the positive aspects of digital connectivity, and a more dynamic scientific process.

Still, let’s never do it again.

Zeynep Tufekci is a contributing writer at The Atlantic and an associate professor at the University of North Carolina. She studies the interaction between digital technology, artificial intelligence, and society.