
Blocklisting and the philosophies of free information

This is not going to be the best-structured piece about this subject, but I feel it's something that is really important to address.


Blocklisting may feel counter-intuitive given what we've been told the values of the internet should be.

But we need to recognise that the internet doesn't have an inherent state; we can make it whatever we want it to be. And even if we believe that freedom of information and communication is generally good (as I do), we must put limits on some communication in order to protect it - for the simple reason that communication can be weaponised.


The internet is not just a network of information, it is a network of people. It is social, and it is the social context that cultivates the communication in the first place. If nobody is willing to share information, then it's not going to exist. If we let people intimidate others out of sharing (which is often done to marginalised groups on the internet), then we are losing potentially valuable sources of insight and community strength (and, generally, just cool people).

If you just generally care about humane and compassionate social spaces, this outcome is unacceptable.

But if you care about information and communication, it's also unacceptable - if a social space is not moderated, not only will the loudest and most aggressive dictate what ideas are acceptable, they will also dictate what groups of people are welcome at all.


If these people are unwilling to change their ways, exclusion (of varying degrees) is probably one of the only realistic options. And for the aforementioned reasons, doing this is not counter-productive to the altruistic, higher philosophical goals of the internet. It is logical and necessary in order to maintain them.

This will probably not be easy, and there will definitely be mistakes, but it is absolutely necessary to consider this possibility.


What I do not mean - as people like to interpret with this kind of thing - is a hierarchical approach akin to censorship.

What I mean is that exclusion should be considered as one of a multitude of approaches, researched by multiple independent groups of people (something decentralised), with communities taking different sets of actions depending on their circumstances, in order to cultivate social spaces that can be places of compassion and constructive, non-antagonistic communication.

(This is why BLOCKchain emphasises evidence and trying to explain posting behaviour, so instances can more easily assess individual instance threats if they wish.)
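To make that idea concrete, here is a minimal sketch of what an evidence-carrying blocklist entry and per-community assessment could look like. This is purely illustrative - the class names, fields, and the `actions_for` helper are my own hypothetical constructions, not BLOCKchain's actual format or API. The point it demonstrates is that when entries carry documented behaviours and evidence, each instance can apply its own standards rather than accepting a recommendation wholesale.

```python
# Hypothetical sketch - NOT BLOCKchain's real data format. It illustrates
# how a blocklist entry might pair a recommendation with the evidence
# behind it, so each instance can assess the threat independently.
from dataclasses import dataclass

@dataclass
class BlocklistEntry:
    domain: str                # the instance in question
    behaviours: list           # documented posting behaviour, in plain terms
    evidence_urls: list        # links to archived examples
    suggested_action: str      # e.g. "silence" or "suspend" - advisory only

def actions_for(entries, tolerated_behaviours):
    """Each community decides for itself: keep only the entries whose
    documented behaviours this community is not willing to tolerate."""
    return [e for e in entries
            if not set(e.behaviours) <= tolerated_behaviours]

entries = [
    BlocklistEntry("harassment.example", ["targeted harassment"],
                   ["https://example.org/archive/1"], "suspend"),
    BlocklistEntry("edgy.example", ["shock content"],
                   ["https://example.org/archive/2"], "silence"),
]

# A community that tolerates shock content but not harassment
# would act only on the first entry.
print([e.domain for e in actions_for(entries, {"shock content"})])
```

Because the entries explain themselves, two instances consuming the same list can reasonably arrive at different block decisions, which is the decentralised, non-hierarchical property described above.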

Ultimately, not taking community action about this issue will not solve anything.


Having and encouraging this kind of community behaviour can also have a herd immunity effect.

On big centralised social networks, awareness of bullying dynamics and social manipulation is low, corporate incentives keep oversight and rule enforcement low, and there is always the possibility of making a new account with the same access to potential targets as the last one. This makes the social cost of being a malicious actor negligible or non-existent.

However, on decentralised social networks, the more people become aware of how communication can be weaponised and of their newfound powers, and the more admins engage in taking collective action against antagonistic spaces and people, the more protected vulnerable people become (and the more their communication is made available to our nodes), while the social power and value of the excluded instances decreases as their access to network nodes decreases.

This (I theorise) is a rare example where shitty behaviour on the internet does have a social cost. That is really important, because it creates systemic consequences and real friction against this behaviour.

This is why, in my experience, the affected instances (especially the most antagonistic ones) are typically the angriest when they are blocked - their agency to commit malicious actions, or to communicate at all, is limited as a measured response to their behaviour.

In other words - the naughty kid is finally put on the naughty stool.

They have limited places to go: if they try to be antagonistic on a moderated, small-scale, decentralised social networking space, they will be thrown out pretty quickly, so the only other instances they can go to have a reasonable chance of also being blocked. And making a new instance is not as cheap or as easy as making a new Twitter account.

This isn't fool-proof, and it may not last forever, but it's a start.