This new list breaks off each kind of abuse, hate, etc. that an instance may engage in into its own file, making for much more convenient browsing (and making it much easier for me to update).
Some of these instances don't exist anymore, but will stay on the list for future reference, for a variety of reasons. [Learn more about how dead instances are handled here.](../info/deleted_instances.md)
**All of the instance links in this document are to other documents in this GitHub detailing the evidence about that instance. They do not go to the instances themselves.**
**Unfortunately, evidence links are automatically hyperlinked by GitHub. I will try to circumvent that in due course; until then, avoid clicking on them unless you actually want to visit them.**
**Any content or topics beyond this point may be of a distressing nature. If you're not in the mood to experience that right now, I recommend coming back later.**
These instances often advertise themselves as having laissez-faire moderation or as 'not a safe space'. What they really mean is that they are a safe space for a variety of violent and/or hateful speech, ideologies and shitty actions, including (but not limited to):
Some of these are pretty obvious; others, less so. (The ones that are less obvious have links; others are awaiting articles because of the ongoing migration from the old list ^^)
Conspiracy theories can be a complicated subject, and certain kinds are pretty harmless and okay, so for the sake of this list, a conspiracy theory counts as dangerous or violent if it involves one or more of the following:
## Unwilling to moderate hate speech and violent speech
Instances that aren't overtly safe spaces for violent speech, but that say they want to moderate as little as possible. Often the only things they're really willing to moderate are what's illegal for them (and, usually, harassment).
The problem with that is that it leaves out fascism, sexism, racism, homophobia, and other forms of bigotry. So while these instances aren't necessarily engaging in bigotry themselves, they are enabling it by letting other people do it on their servers. This means that by federating with them, you're putting yourself and your users at risk too.
**This isn't the same as just not having a CoC or not having a detailed CoC.**
Instances that simply have no CoC, or no detailed CoC against violent speech, aren't automatically on this list, because an instance can still act against violent speech without one. This list is only for instances whose administrators have demonstrated that they ignore or accept instances of violent speech, or have made it clear that they would act that way if it were to happen.
Instances that have made it clear that they won't remove violent individuals, but at least will silence them from their end (so they don't bother other instances) are also not on this list.
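For Mastodon admins, silencing a domain from your end (rather than suspending it outright) can be done from the moderation UI, or programmatically via the admin API's domain-block endpoint. Below is a minimal sketch assuming a recent Mastodon server; the instance URL and admin token are placeholders, and the code only builds the request rather than sending it:

```python
import json
import urllib.request

# Placeholder values: substitute your own instance and an admin-scoped token.
INSTANCE = "https://example.social"
ADMIN_TOKEN = "YOUR_ADMIN_ACCESS_TOKEN"

def build_domain_block_request(domain: str, severity: str = "silence"):
    """Build (but don't send) a request to Mastodon's admin domain-block
    endpoint. 'silence' hides the domain's posts from public timelines;
    'suspend' severs federation with it entirely."""
    payload = json.dumps({"domain": domain, "severity": severity}).encode()
    return urllib.request.Request(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        data=payload,
        headers={
            "Authorization": f"Bearer {ADMIN_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_domain_block_request("corporate.example", severity="silence")
print(req.full_url)  # https://example.social/api/v1/admin/domain_blocks
```

To actually apply the block you would pass `req` to `urllib.request.urlopen()`; silencing is the gentler option described above, since your users can still follow accounts on the silenced domain if they choose to.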
## Illustrated sexualised depictions of people who appear to be minors
This isn't necessarily a moral judgement, but it is illegal in most Western countries. (Notably, while AFAIK this is now illegal in Japan, where this material often comes from, the relevant law was only introduced recently and isn't particularly enforced.)
**This includes *but is not limited to* lolicon and shotacon.**
**This judgement has to be made by eye, not by whatever an accompanying text says.** So if some accompanying text says the person is over 18, but they definitely *look* under 18, that still falls under this category.
With this list I am generally going by the word of mods and admins I trust unless I accidentally stumble upon it myself or it's self-evident from the instance description. I generally won't be linking to the material because it would probably be legally fraught for everyone to do so.
- **wxw.moe** (It has a loli-posting bot called AnimeGirlsBot. The instance is Chinese so I can't talk to the moderator about it but judging from their profile they are totally fine with it.)
Instances owned or used by corporations to advertise and post product updates. I wouldn't say suspend them; I just think that you may want to silence them so that only people who want to see them will see them on the federated timeline.
- They are opt-out instead of opt-in, meaning it's up to individual users to block them or add '#nobot' to their bio. This is frustrating to many, and an unacceptable situation in terms of privacy.
While it would be good if there were better discovery mechanisms in federated social networks, they need to be built in a way that mitigates abuse and doesn't access a person's data without their explicit consent in advance.
This may not become a serious issue, but I think it could be a really important thing to keep an eye on for the future. I think that corporate ownership generally conflicts with the idea of social spaces as an emotionally supportive environment (how many of us left Twitter because it tolerates Nazis and doesn't have any real ethical or moral positions?).
They also conflict with the idea of intermingling spaces with a cooperative relationship to each other (ie. our instances): corporations expect to dominate 'markets', while we provide services based on what we can afford to those who want and need them. I would say that our kind of social networking is potentially an existential threat to corporate social networking, and we shouldn't give them an inch because they will take a mile. If they ever take an interest in decentralised social networking, they will only care about us insofar as we're good PR for them.
I'm not necessarily saying block these right now (I'm not, apart from Pawoo, for other reasons), but I think keeping watch would be a good idea, especially if they become a thing in the West. They currently only seem to exist in Japan.
- **Pawoo** (pawoo.net) (made and run by Pixiv Inc. [JP])
- **Pawoo Music** (made and run by Pixiv Inc. [JP])