Fred Trotter

Healthcare Data Journalist

Conferences, Health Datapalooza, Twitter, Values

Bluesky vs Twitter at Datapalooza

X and Twitter vs Bluesky

Health Datapalooza is my favorite conference, focusing on healthcare data from the U.S. Federal Government, the states, and other countries. It fosters a vibrant community leveraging these datasets for meaningful work. As a volunteer streamer for the conference, I’m working to make some content accessible to patients, clinicians, researchers, and healthcare data enthusiasts worldwide.

Previously, I’ve successfully used Twitter to make Datapalooza more welcoming and inclusive for patients who cannot be there in person, especially those unable to travel. The conference aligns closely with my role as a healthcare data journalist, bridging healthcare, technology, and policy. It’s a key meeting ground for patients concerned about digital civil rights, with active participation from the Light Collective—a group Andrea Downing and I formed due to concerns about Facebook’s data practices.

I think Twitter is probably still better than Facebook in this regard, but Elon Musk has definitely made decisions that make Twitter worse for patients.

Elon Musk portrays himself as a free speech absolutist, and as a journalist, I can embrace that stance. While free expression and dissent are critical, there are forms of communication that have long required government oversight and intervention. The classic example is shouting “Fire” in a crowded theater. Whenever possible, limitations on free expression should take the form of “consequences for speech” rather than the suppression of speech. The “freedom of the press” is a unique freedom, because it belongs to those who own “presses.” Presses, and platforms like Twitter and Facebook, give people freedom of expression and rights that in previous decades and centuries were available only to a handful of people with publishing power.

These platforms have always felt that it was their users who should bear all of the consequences of speech. And perhaps, when Twitter was just a simple “timeline merger” for everyone you followed, one could argue that it was just a publishing platform. But now, there is no social media platform that is not using AI to decide what content to show people. Deciding which content gets seen and which does not is at the heart of the journalistic and editorial processes. The fact that these platforms regularly pretend otherwise is an insult to everyone’s intelligence.

So what is the difference between censorship and moderation? Moderation should simply be the immediate consequence of crossing boundaries and limitations on public speech that were established long ago. In my view, the line should be clear when it comes to explicit or implied threats of physical violence. However, Twitter, in particular, has been inconsistent in enforcing this boundary: it has chosen not to moderate implicit threats of violence and has reinstated accounts that were banned for making explicit threats, accounts that continue to flirt with threats of violence. In many cases people are implicitly threatened, and Twitter turns a blind eye. Then there’s the issue of “fighting words”: language that is not a threat of violence in itself, but that has historically been seen as crossing a line into instigating violence.

For me, the balance between moderation (which I view as a positive) and censorship (which I view as harmful) is incredibly difficult to strike. Real moderation is a hard job, and reasonable people can disagree on the right approach.

However, Twitter does not seem invested in maintaining a thoughtful balance between the two.

When you consider Elon Musk’s stance as a self-proclaimed free speech absolutist, it becomes essential to examine Twitter’s moderation standards under his leadership. For instance, Twitter’s policies are supposed to prevent the spread of misleading AI-generated videos that impersonate public figures. However, Elon himself has promoted such content when it suits his interests. At the same time, he has taken legal action against individuals whose speech or research on Twitter offends him. Most recently, he has made statements that should qualify as a vague threat of sexual assault.

Ironically, Elon has engaged in what appears to be classic libel, particularly when he infamously accused someone of being a pedophile with no supporting evidence – an accusation that had devastating consequences for the individual involved. With Elon’s massive platform, there’s no real way for a private citizen to counter such claims with speech of their own on an equal scale. That’s precisely why libel and slander laws exist: to protect those without access to a “printing press” from the unchecked power of those who do.

Elon seems very comfortable defending his own right not to be slandered, but he doesn’t extend the same care to others. This behavior violates the central redeeming tenet of the free speech absolutist position: you may have the right to publish whatever you want, but you are not shielded from the consequences of your words. Elon, however, seems more than willing to avoid responsibility for his speech while ensuring others face consequences for speech he disapproves of. Musk is reputed to regularly interfere with specific moderation decisions on Twitter, including the removal of Community Notes when he regards them as problematic. Generally, I am enthusiastic about how well the Community Notes feature has worked on Twitter, and I am sorry to see it undermined in this fashion. It is worthwhile to understand how the Community Notes algorithm is not merely “majority rules” but is based on algorithmically calculated consensus between divergent viewpoints.
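To make that last point concrete: as I understand the published descriptions of the Community Notes scorer, ratings are fit with a small matrix-factorization model, and a note is judged by the support that remains after the model accounts for raters’ viewpoint alignment. The toy sketch below illustrates that “bridging” idea; the model form follows those public descriptions, but the data, hyperparameters, and code are mine and purely illustrative, not the production algorithm.

```python
import numpy as np

# Toy illustration of the "bridging" idea behind Community Notes scoring.
# Each rating is modeled roughly as:
#   rating(user, note) ~ mu + user_intercept + note_intercept + user_factor * note_factor
# The factor term soaks up viewpoint alignment, so a note's "helpfulness" is
# read off its intercept: the support left over once partisanship is explained.
# Everything below (data, learning rate, regularization) is made up for the demo.

rng = np.random.default_rng(0)

# Six raters in two "camps", two notes.
# Note 0 is rated helpful by both camps; note 1 only by camp A.
ratings = np.array([
    [1.0, 1.0],   # camp A rater
    [1.0, 1.0],   # camp A rater
    [1.0, 1.0],   # camp A rater
    [1.0, 0.0],   # camp B rater
    [1.0, 0.0],   # camp B rater
    [1.0, 0.0],   # camp B rater
])
n_users, n_notes = ratings.shape

mu = 0.0
user_b = np.zeros(n_users)            # per-rater intercepts
note_b = np.zeros(n_notes)            # per-note intercepts (the "helpfulness" signal)
user_f = rng.normal(0, 0.1, n_users)  # per-rater viewpoint factor
note_f = rng.normal(0, 0.1, n_notes)  # per-note viewpoint factor

lr, reg = 0.05, 0.03
for _ in range(3000):
    pred = mu + user_b[:, None] + note_b[None, :] + np.outer(user_f, note_f)
    err = ratings - pred
    mu += lr * err.mean()
    user_b += lr * (err.mean(axis=1) - reg * user_b)
    note_b += lr * (err.mean(axis=0) - reg * note_b)
    user_f += lr * (err @ note_f / n_notes - reg * user_f)
    note_f += lr * (err.T @ user_f / n_users - reg * note_f)

# The cross-camp note (note 0) keeps a clearly higher intercept than the
# one-camp note (note 1), whose support is largely explained by the factor term.
print("note intercepts:", note_b.round(2))
```

In this toy example, the note that both camps rated helpful keeps a high intercept, while the note whose support came from only one camp has much of its rating pattern explained away by the viewpoint factor. That is the sense in which the algorithm rewards consensus across divergent viewpoints rather than a simple majority.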

Given all of this, I find myself questioning whether it’s ethical to remain active on Twitter in the long term. Yet, I’ve invested more time and energy into Twitter than any other platform. I have nearly 8 thousand followers, and while there are plenty of people with millions of followers, my followers tend to be members of the clinical or patient community. They are not merely an audience for my ideas, but a sensor network that helps me to stay informed about changes in health tech and data policy. These connections are vital for my ongoing work as a journalist, and it is my hope that I help other people with their vocations in the same way. 

I’ve appreciated Twitter’s pseudonymous privacy model, which I’ve always found superior to Facebook and other social networks. And, of all platforms, Twitter has consistently provided the best space for intellectual discussion. So, this leaves me asking: what should I, or others in a similar position, do? 

Recently, Elon Musk chose to pick a fight with the government of Brazil. Brazil’s government has its problems (I am not sure I know of a government that does not), but instead of engaging in dialogue or using legal means to resolve these disputes, Elon chose a more combative approach. As a result, Brazil’s courts have ordered the platform blocked, cutting off Brazilian users, including many who rely on it for their livelihoods and are now in a desperate situation. In response, it appears that much of the country is gradually migrating to platforms like Bluesky, Mastodon, and Facebook’s Threads.

Bluesky, similar to Mastodon, is an alternative to Twitter that offers a decentralized microblogging experience. However, Bluesky and Mastodon approach moderation in fundamentally different ways. Mastodon relies on server-level moderation, which has its own challenges, as it outsources the task of moderation to individual server administrators. Bluesky, on the other hand, focuses on giving users robust tools for self-moderation, such as blocking mechanisms, which seems like a more user-centric approach.

I don’t harbor any illusions that either platform will be completely free from problematic content; no social network ever is. And while there are well-documented problems with moderation on both Bluesky and Mastodon, both platforms treat moderation as something to be solved by and with the community, make it a priority, and are at least attempting to act consistently and in good faith. I am not confident that the same can be said for X/Twitter or Threads/Facebook.

These new peer-moderated platforms are focused on addressing the worst types of speech, akin to the well-known “shouting fire in a theater” or slander, without overstepping into excessive one-sided censorship. In that sense, they seem like healthier environments for fostering online communities.

However, the challenge remains: my entire network and following are on Twitter. Should I try to convince people to migrate? How do I continue to engage in events like Datapalooza, which rely heavily on Twitter, while wanting to move away from the platform? I do not want to abandon my community on Twitter, even while I feel strongly that it is time to move on from the platform. I want to find a plan that gives Twitter/X the opportunity to improve itself, but also gives me and my community opportunities to look elsewhere if it does not.

I believe many of us are facing this same dilemma. So, for those of us working in health technology policy—or the broader clinical fields that intersect with my work—I propose that we begin to collectively diversify to Bluesky. This should not just be a matter of creating an account on a new platform; it should also be a step towards improved moderation and towards replacing the fundamental value that the patient and clinical communities were getting from Twitter. The rest of this blog post will be devoted to mechanisms for handling this “foot in both worlds” approach.

The first tool I’ll be using in this process is an app called Yup (available at https://yup.io/), which allows for cross-posting between Twitter and Bluesky. It’s a bit clunky and has some cryptocurrency stuff that I’m not thrilled about, but it serves the purpose of posting microblogs simultaneously on both platforms. There is a limit on the number of tweets that Yup can post in a day, however, so I will also be looking at Buffer, a paid service that should not have that problem. This will allow me to keep a foot in both worlds for now. If I figure out how to merge the reading streams (Yup and Buffer are mostly good for posting), I might write more about that later.
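For those who would rather not depend on a third-party service (or who bump into Yup’s daily limit), the same “post once, publish twice” idea can be scripted directly against both APIs. Below is a minimal sketch, not the tool I actually run: it assumes the tweepy and atproto Python packages, developer credentials for both platforms, and environment variable names that are placeholders of my own.

```python
import os

import tweepy               # pip install tweepy  (Twitter/X API v2 client)
from atproto import Client  # pip install atproto (official Bluesky SDK)


def crosspost(text: str) -> None:
    """Post the same short update to Twitter/X and Bluesky."""
    # Twitter allows 280 characters and Bluesky allows 300, so 280 fits both.
    if len(text) > 280:
        raise ValueError("Trim the post to 280 characters so it fits both platforms.")

    # Twitter/X, via tweepy's v2 Client.
    twitter = tweepy.Client(
        consumer_key=os.environ["TWITTER_API_KEY"],
        consumer_secret=os.environ["TWITTER_API_SECRET"],
        access_token=os.environ["TWITTER_ACCESS_TOKEN"],
        access_token_secret=os.environ["TWITTER_ACCESS_SECRET"],
    )
    twitter.create_tweet(text=text)

    # Bluesky, via the AT Protocol SDK (log in with an app password).
    bsky = Client()
    bsky.login(os.environ["BSKY_HANDLE"], os.environ["BSKY_APP_PASSWORD"])
    bsky.send_post(text)


if __name__ == "__main__":
    crosspost("Keeping a foot in both worlds: cross-posting test. #HDPalooza")
```

This only handles plain text; threads, images, and link previews need platform-specific handling on each side, which is a big part of why hosted tools like Yup and Buffer exist in the first place.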

At Datapalooza, I’ll be encouraging others to take similar steps. My hope is that we can gradually build momentum and move our healthcare data community onto Bluesky. By doing so, we can preserve the connections we’ve built on Twitter without being forced to participate in a platform that has taken a deeply problematic stance on moderation. This way, we can honor the conversations and relationships we’ve had while fostering a healthier and more principled online environment for the future.

Twitter has long been a powerful platform for connecting diverse communities within healthcare, especially in the patient community. It has allowed patients to interact as equals with healthcare researchers, data experts, policymakers, clinicians, and other key decision-makers in the healthcare world—people who don’t always have direct connections to the patient experience. Personally, Twitter introduced me to the patient community, and without the friendships and relationships I built there, I might have remained just another programmer.

Movements like the Walking Gallery, led by Regina Holliday, and countless other patient-centered initiatives have been deeply rooted in Twitter. But now, many of these movements are searching for a new home. A major concern for me is that the type of content Elon’s moderation policies allow is often harmful to vulnerable populations, including patients. I’ve seen this firsthand, particularly in how transgender, mental-health, and long-COVID patients, among others, are treated on the platform. Many of the things now labeled as acceptable on Twitter (NSFW Link) can directly threaten patient safety—not just in the “hospital” sense, but in terms of preventable harm in the broader healthcare landscape.

When patients are subjected to bullying, doxxing, and cyberattacks on a platform like Twitter, it flies in the face of patient safety principles. There has already been significant documentation of the ways in which Facebook has similarly failed to protect patient communities. It’s time to find platforms that are safer and more respectful to patients for the kinds of interactions and advocacy work that Twitter once fostered. As I expected, Andrea Downing and The Light Collective are already on Bluesky! I have made a “Starter Pack” to ensure that healthcare policy/data/tech/privacy enthusiasts have a quickstart to solid content on Bluesky.

That’s why I’m making this change. By having one foot in both the Twitter world and the Bluesky (or even Mastodon) world, I’m hoping to navigate this transition thoughtfully. I would love to hear from people who are doing similar work on Mastodon to find out how they’re managing this shift. In the end, we need new homes where moderation can be conducted in a way that protects and respects patients, rather than putting them in harm’s way.

If you know of other Bluesky accounts, including your own, that you would like listed in these starter packs, let me know! I am looking for accounts that are actively engaged in serious or snarky discussions about healthcare, preferably with a sense of humor that keeps things below PG-13 when possible.

So, I’ll see you on Bluesky, and thanks for reading my blog.