- Desirable: Most people actually want to be on it and find some use or pleasure in using it
- Non-toxic: I added this one separately because some people might actually enjoy being on a toxic platform; that is not what this is about
- Less exploitable: Difficult to manipulate, which is increasingly important in the age of cheap LLMs, but this can be a tradeoff with desirability as barriers are erected to prevent bot manipulation / vote brigading.
Taking Hacker News as an example of a desirable, non-toxic, and less exploitable social platform, I believe several attributes make it so:
- Voting: Upvoted content rises to the top, which contributes towards the desirability / non-toxicity of the content
- Strict rules / moderation: Keeps the content on topic, constructive, friendly, more pleasant to parse. Contributes towards desirability / non-toxicity and also makes it less exploitable as manipulation can be detected.
- Novelty / surprisal: This third one is somewhat special: it is less a mechanistic property of the platform than a content focus choice. It contributes towards desirability, but I believe also towards lower exploitability: it is more difficult to fake novel or thought-provoking content.
Now I do realize I could have phrased the target properties differently, and semantics can always be discussed ad infinitum, so take the spirit of what I mean rather than the exact wording.
What I'm specifically asking HN here is what sets of rules / mechanisms could have the above as emergent properties?
Is Hacker News a desirable, non-toxic, less exploitable platform also because it focuses on novel, thought-provoking content, or could there be mechanistic rules that would produce these properties for any kind of social platform?
I'll include some possible mechanistic rules that crossed my mind that each have their flaws:
- Member verification (ID / Credit card): Less exploitable but likely very undesirable for many
- Vouching: Start with a kernel of trusted members, include only members vouched for
- Contribution limits: Members can only contribute / vote n times per day / week
- Active discussion limits: Not everyone is involved in the same conversation at once, e.g. two people discuss a topic while others can "raise a hand" to participate in the conversation
- Exposure limits: Your post can never reach more than n random people; it has to be actively reshared by someone to spread further (see the sketch after this list).
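A minimal sketch of how two of these rules, a contribution cap and reshare-gated exposure, could be enforced. Everything in it (class, method names, thresholds) is hypothetical and only meant to illustrate the idea, not how any existing platform works:

```python
# Illustrative sketch only: names and thresholds are invented for this example.
import random
from collections import defaultdict

CONTRIBUTIONS_PER_DAY = 5   # assumed daily cap on posts / votes / reshares per member
INITIAL_EXPOSURE = 20       # assumed number of random members who see a new post

class Platform:
    def __init__(self, members):
        self.members = list(members)
        self.contributions_today = defaultdict(int)  # member -> actions used today
        self.audience = {}                           # post_id -> members who can see it

    def _contribute(self, member):
        """Return True if the member is still under today's contribution cap."""
        if self.contributions_today[member] >= CONTRIBUTIONS_PER_DAY:
            return False
        self.contributions_today[member] += 1
        return True

    def post(self, author, post_id):
        """A new post only reaches a small random sample of members."""
        if not self._contribute(author):
            return False
        sample = random.sample(self.members, min(INITIAL_EXPOSURE, len(self.members)))
        self.audience[post_id] = set(sample)
        return True

    def reshare(self, member, post_id):
        """Resharing is the only way a post spreads beyond its initial audience."""
        if member not in self.audience.get(post_id, set()):
            return False  # you can only reshare what you were actually exposed to
        if not self._contribute(member):
            return False
        extra = random.sample(self.members, min(INITIAL_EXPOSURE, len(self.members)))
        self.audience[post_id].update(extra)
        return True
```

The machinery is trivial (a couple of counters and a per-post audience set); the hard part is choosing the thresholds so the platform stays desirable while remaining hard to exploit.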
Voting and novelty also exist on other, more problematic platforms. I don't think simple voting really helps in maintaining the social health of a platform; a more complex system would probably be more beneficial than a simple count.
But what really helps is good and fair moderation, and a suitably sized group. If it's too small, nothing much happens; if it's too big, you will be drowned in noise and ground down by too many differing opinions. And a manageable size also helps moderation.
But I don't think enforcing low limits really helps here. It's just another simple mechanical solution, like voting. How big or small a limit should be depends too much on the topic, the thread, and the persons involved. Some topics need many involved people; some people don't always have the time to pay full attention to something, but others could continue their part. Good discussions evolve naturally and also randomly, because you never know which expert is around and how much time they have on that day.
Also, you say social platform, but social also means meaningless chat, while you seem to aim for meaningful, high-quality interactions.
If you aim for high-quality discussions, then maybe it would be more feasible to improve the extraction and presentation of the meaningful parts. Let humans and AI mark useful parts, have AI constantly create summaries, and so on. Kind of like having the discussion on one side, and a result like a "Wikipedia article" on the other.
Moderation for sure helps; would there be ways to make it scalable with less manual supervision? Or a system that would organize people with certain rule-sets to distribute them into suitably sized groups?
I do agree with your statement that "Good discussions evolve naturally and also randomly". But let's say your platform becomes popular. It will attract players who will want to exploit that to sway opinions for their own gain, and I believe it is becoming increasingly cheap to game such systems and simulate whole crowds. So the limits are mostly proposed with this in mind.
Indeed, perhaps the term social platform is vague, and the "optimal rules" could differ for a platform that is a mega-forum, a network for friends, or just generic post sharing.
I'm wondering if there is some sort of taxonomy of these rulesets or levers that exist? Or a review paper on what has been tried and what effects they had? There are so many possible ways to structure online social interactions.
Fair. At this point, I'm not sure if X should still be called social; it's really just a mess of bots and voices.
> Moderation for sure helps; would there be ways to make it scalable with less manual supervision?
This would be the golden goose of communication. Everyone wants good automated moderation, but depending on the topic, crowd, and size, it's really hard, and probably expensive, depending on the solution. The main problem is that you have to have a very good understanding of any disputed topic to judge whether something is good for the discussion or not. And not even all human mods have this on all topics.
> But let's say your platform becomes popular. It will attract players who will want to exploit that to sway opinions for their own gain, and I believe it is becoming increasingly cheap to game such systems and simulate whole crowds. So the limits are mostly proposed with this in mind.
Understandable. And yes, this often happens: a community grows, gains numbers, and the vibe and focus shift in some way. It's similar to what is usually called something "going mainstream". Numbers influence the community, and it's hard to preserve the original character. And this is normal social friction; communication is always about some level of "swaying opinions" and exploiting others for some goal.
So if I understand you correctly, you want to isolate the bad actors and limit their impact? The question is whether you can successfully separate them from honest, or even good, actors. Maybe a mechanical or automatic way to build up reputation, social standing, and social impact might be a way. HN, for example, uses karma points to unlock certain features at certain levels. Maybe if you built up a more detailed karma system, more complex than just points, it would be possible to create a semi-automated system for healthy social interactions?
As I already said, I don't like simple voting systems, because they are too simple and tend to drift into simple number games. For example, nobody knows why something receives votes, and people tend to vote more for certain comments, which are not necessarily beneficial for the discussion. So I think a more diversified voting, with meaningful votes, would be better. On GitHub, people use emojis to communicate their reaction to messages in issues, and some projects even make use of them for certain actions. So using a set of preselected emojis with specific positive and negative meanings would IMHO enhance the simple voting system and maybe allow automated reputation-building, which could then be used by an automated modding system.
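Here is a rough sketch of what such a reaction-based voting signal feeding into reputation could look like. The reaction set, weights, and thresholds below are invented for illustration; this is not how GitHub reactions or HN karma actually work:

```python
# Purely illustrative: reaction names, weights, and thresholds are made up.

# Each allowed reaction carries an explicit meaning and a weight, so reputation
# is built from *why* people reacted, not just from a raw vote count.
REACTION_WEIGHTS = {
    "insightful": 2.0,
    "well_sourced": 1.5,
    "funny": 0.5,
    "off_topic": -1.0,
    "misleading": -2.0,
}

def comment_score(reactions):
    """reactions: dict mapping reaction name -> number of members who used it."""
    return sum(REACTION_WEIGHTS[name] * count
               for name, count in reactions.items()
               if name in REACTION_WEIGHTS)

def update_reputation(reputation, reactions, decay=0.99):
    """Fold one comment's reactions into a member's long-term reputation.
    Older behaviour slowly decays, so reputation also reflects recent conduct."""
    return reputation * decay + comment_score(reactions)

def unlocked_features(reputation):
    """Feature gating in the spirit of HN's karma thresholds (numbers invented)."""
    features = ["comment"]
    if reputation > 50:
        features.append("flag")
    if reputation > 200:
        features.append("downvote")
    return features
```

A system like this would at least let an automated modding layer see why a comment was rewarded or punished, instead of working from a bare count.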
> I'm wondering if there is some sort of taxonomy of these rulesets or levers that exist?
There is a broad set of information and knowledge in communication science, diplomacy, psychology, sociology, etc. But whether it can be used for a social platform is a different thing. Social platforms should be easy and simple; people want to chat and entertain themselves. If you make it too complicated or annoying, they won't participate much, and the platform will die. The biggest problem is again resources: manpower for modding, manpower for organization, time invested in using the platform.
And thinking about it, there are also all kinds of specialized subreddits, which have strict rules about how they communicate and toward which goal. They are usually kinda good, tame, and focused in their disputes.
I don't know exactly what limits HN uses. You are green at first, which is euphemistically true. You aren't allowed to downvote at first (although that restriction always applies to direct replies?). Generally I would describe the limits as minimally invasive. I would guess the average upvote score for a comment on HN is probably around 3?
These mechanisms are quite smart and not too invasive, but they are not the sole reason for HN being like this.
For your network it highly depends on what audience you want to nurture. Do you want the classic golf club where people feel superior and exclusive to others? Use vouching and ID checking.
Do you want free thinkers? Don't moderate much, but you may have to gatekeep people looking for offence (or just don't feed the trolls and ignore them).
Do you want a broad audience or enthusiasts? "Exploitability" is not only a matter of education, but education certainly helps. If that is a problem on your platform, you need to find out what type of exploitation it is in order to counter it.
Not everyone is alike and will get along, there are different personalities having different expectations. If you cater to all, you probably won't be successful.
I cannot say what attracts people preferring "pleasant" (meaning?) discussions on the net. I probably more or less belong at the other end of that spectrum.
I wonder if it would be possible to simulate this to understand what behaviors would emerge if you set certain types of rules. It is certainly difficult to create coherent personalities with LLMs that act in realistic ways, but I wonder if one could get an approximation.
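For what it's worth, a very crude approximation doesn't even need LLMs: scripted agents with hard-coded policies can already show how a rule such as a daily vote cap interacts with a small brigading minority. Everything in this sketch (agent counts, cap, behaviours) is made up purely for illustration; LLM-driven agents would replace the hard-coded policies:

```python
# Toy simulation sketch: all behaviours and numbers are invented.
import random

def simulate(n_honest=90, n_brigade=10, n_posts=20, vote_cap=5, rounds=7, seed=0):
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_posts)]  # hidden "true" quality of each post
    target = 0                                        # the post the brigade pushes
    scores = [0] * n_posts

    for _ in range(rounds):
        for _ in range(n_honest):
            # honest agents spend their capped votes on posts they judge to be good
            for post in rng.sample(range(n_posts), vote_cap):
                if rng.random() < quality[post]:
                    scores[post] += 1
        for _ in range(n_brigade):
            # brigading agents dump their whole daily budget on the target post
            scores[target] += vote_cap

    ranking = sorted(range(n_posts), key=lambda p: -scores[p])
    return ranking.index(target), quality[target]

rank, true_quality = simulate()
print(f"brigaded post ranked #{rank + 1} despite true quality {true_quality:.2f}")
```

Even in this toy setup a 10% brigade pins its target to the top of the ranking despite the vote cap, which is exactly the kind of emergent effect it would be useful to measure before picking limits.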
Perhaps what I have in mind is also not best described as "pleasant", but rather something that is net-positive for society, where society as a whole is better off having it than not. This is arguably the case for HN but not necessarily for some of the bigger platforms out there.