Social Media vNext

Brennan Stehling
10 min read · Mar 27, 2022
(Photo: Howard Street in San Francisco)

So you want to build a new social media service that improves on what we’ve been using all these years. What would be the essential features? How would it avoid the traps that have been such a problem for other services? Why would users move to a new service when all of their friends are already somewhere else?

It is not an easy problem to solve, but the dominant services, with their millions or even billions of users, surely have shortcomings worth addressing.

Problems

Perhaps the root of every problem is that a service was created with a single purpose in mind and users started to use it differently. Not only was the behavior unexpected, it was at times abusive and dangerous. Take YouTube, for example. It was meant to be a simple video service which would let anyone create a channel, upload videos and interact with their audience. If it were only used as intended, there would be no need for users to report abuses and block other users. It has become necessary to moderate comments and videos, which are required to follow community standards and not violate any laws. Instead of focusing on the features of a video service, YouTube has had to invest a great deal of time and resources into managing these problems, and the service grows faster than moderation can keep up. Users cannot understand why the rules are applied inconsistently, and when action is taken there is often no explanation. This problem is common to nearly every service, and any new service will need to plan from the start on how to minimize it.

Another common problem is how services are funded. Nearly always, ad revenue is what keeps a service operating. Optimizing for ad performance means that every detail about every user must be collected and exploited, even when that compromises their personal safety and privacy. A new social media service should not be so dependent on ad revenue that it must exploit users to benefit advertisers.

The last area I’ll cover is misinformation. This is much of what has caused the divide in recent years, and it really should be addressed. Some content is misleading simply because users post without knowing better, and some is posted to mislead intentionally. Professional troll farms operate with the goal of influencing many people with malicious intent. It goes far beyond spreading a rumor about a classmate in school. It shapes public opinion and is used to sway elections and even justify military action. Is it enough to simply deploy an army of moderators, or is there a better way?

With these problems in mind, I will cover a few features which I believe a new social media service should include to reduce or eliminate them.

Groups

Services like Twitter and TikTok just have a feed. You can choose who to follow. You can also follow topics or search with hashtags. You don’t really know what you’ll see next, since the algorithm decides and there is no structure to organize the content. This is where groups are an improvement. On Facebook and LinkedIn you can find a group which matches your interests and join it. You can post to just that audience and interact with a smaller number of people instead of a global user base. And within a group, the admins and moderators can set and enforce rules. If a post is popular, it is still only seen by the members of that group. And if someone does post something which violates the rules set for the group, a member can report it and a moderator can act on it however is most appropriate for the group.

On TikTok, since everyone can see your posts and report them, it has become common for accounts to be suspended temporarily or even banned. Mass reporting is done maliciously to take down accounts when some people don’t like what a person is posting, even if it does not violate any rules or community standards. The moderators are not familiar with the person who made the post or with those who follow them. It is not really a connected community.

When the size of a group is small, it is possible to get to know the other members. Members of the group can become moderators. Rules can also be set for the group which do not apply to other groups. And if someone has been behaving badly, moderators could put them in timeout for a period of time so they cannot post or comment. They can even ban them from the group. That user could always find another group with different rules and standards where they may fit in better. It is not necessary to deactivate their account globally and eject them completely from the service.
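To make this concrete, here is a minimal sketch of how group-scoped moderation might be modeled. The types and names (`ModerationAction`, `GroupMembership`, `canPost`) are hypothetical, not from any existing service, but they show the key property: a timeout or ban applies to one group, never to the whole service.

```typescript
// A minimal sketch of group-scoped moderation. All names are
// hypothetical; the point is that an action applies to one group only.

type ModerationAction =
  | { kind: "timeout"; until: Date } // temporary: cannot post or comment
  | { kind: "ban" };                 // permanent, but only in this group

interface GroupMembership {
  userId: string;
  groupId: string;
  action?: ModerationAction; // absent means the member is in good standing
}

// A member who is in timeout or banned in one group keeps
// full access everywhere else on the service.
function canPost(membership: GroupMembership, now: Date = new Date()): boolean {
  const action = membership.action;
  if (!action) return true;
  if (action.kind === "ban") return false;
  return now > action.until; // a timeout expires on its own
}
```

Because the membership record, not the account, carries the restriction, ejecting someone from one community never requires deactivating them globally.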

Typically groups will have members who are local to the area and share a common interest, like mountain bikers in Portland. Clearly the group would be focused on that interest and have rules which disallow off-topic content, since it would likely just create friction in the group. And if someone is looking to organize an event to ride a nearby bike path, this group would be the ideal place to find others who’d like to join. It creates a real connection both online and in the real world. And when most of the members actually know each other and spend time together on trails, it builds trust.

Community Moderators

Fostering a healthy community is done by curating what is posted and influencing how people interact. When moderators are employed by the company and have no involvement in the community, moderation is disconnected and opaque. When someone has a post reported and it is taken down without any explanation, it ruins the experience.

It is far better when a group has members who have been assigned as moderators, who are familiar with the members of the group and have a sense of what has been going on in it. Admins and moderators can set the tone in a group so that it attracts and retains the members they want to be a part of the group. If anyone wants the group to be different, they can create another group and, as admin, set the rules and assign moderators. Users on the service can join and leave groups as they like. There is no risk that an unnamed moderator will misunderstand a post and suspend or ban an account, because that is not how groups work.

When a community is able to manage itself it can become a place online and in the real world where people feel welcome and included.

Premium Services

When Google launched their mail service it was free. Google also launched Docs, Sheets and other services, and each was supported by ads. At some point Google started to offer premium tiers. LinkedIn has offered premium services for many years. It is a great way to give users more and cover the costs of building and maintaining the service.

A fairly new service, Clubhouse, created a new kind of premium service. It allows owners of rooms, which are similar to groups, to get paid by the members who join them. Owners who operate rooms host discussions which add value for their members. It operates in what is being called the Creator Economy. Hosts plan events in their rooms and generate revenue from their audience, not from presenting ads.

Over on YouTube, one way that creators generate an income for their channel is to have subscribers pay them monthly through Patreon, a service outside of YouTube which is often paired with channels on the platform. YouTube has even added a Join button so that supporters can pay content creators directly and access exclusive content and perks.

Eliminating the need to generate revenue only from ads changes the focus of the service to producing the kind of content which is best suited to the audience instead of what gets the highest ad rate. Content creators and their audience benefit the most, and the service can take a margin to fund its operations.

Real Identities

The concept of a Verified Account is now supported on multiple services. Twitter has granted the blue badge for years so that users would know an account actually belongs to a known public figure. That’s great for them, but what about everyone else?

A real identity can support more than just a social media profile. For services like Airbnb, Lyft and Turo, it is helpful to have a background check and a valid driver’s license. If you want to drive for Lyft or rent a car with Turo, you have to upload photos of your license into their system to verify you have an ID. If instead we could link our real identity and the state DMV would let a service check our current status, it would ensure only licensed drivers are operating vehicles on the service. And for background checks, if a user has a new addition to their record, many users may welcome having that person’s access restricted.
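No state DMV offers an integration like this today, so the sketch below is purely hypothetical (`DmvRegistry` and `checkLicenseStatus` are invented names), but it shows the shape of the idea: the service asks whether a linked license is currently valid rather than storing photos of the document.

```typescript
// Hypothetical sketch: no DMV exposes an API like this today.
// The service never sees the license itself, only its current status.

type LicenseStatus = "valid" | "suspended" | "revoked" | "expired";

interface DmvRegistry {
  // Resolves a user's linked real identity to their license status.
  checkLicenseStatus(identityToken: string): Promise<LicenseStatus>;
}

// Only drivers whose license is valid right now may offer rides or rentals.
async function canOperateVehicle(
  dmv: DmvRegistry,
  identityToken: string
): Promise<boolean> {
  return (await dmv.checkLicenseStatus(identityToken)) === "valid";
}
```

A status check like this would also catch a license that was suspended after signup, which a one-time photo upload never could.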

When it comes to dating apps, it would be most welcome to ensure that someone does not have a violent history before matching them and lining them up for a date. It would ensure there is a lot more trust from the start, which would likely increase engagement on these services.

A person’s real identity does not have to be shown to all users, but it would be a useful filter for a service. Perhaps a real ID is only required to join some groups and use certain aspects of a service. Once a user links their real ID with the DMV, many services become available to them. They can decide to show only their first name or a nickname in a dating app, while services can make sure that users cannot easily create multiple fake accounts to abuse the service.

With troll farms being so common, it would be much better to limit what they can do if they have not attached a real ID to their account. For product reviews, I’d rather only allow users who have purchased the product and have a real ID to submit a rating and review.

When it comes to misinformation, having a real ID associated with an account which is often reported for posting misinformation would make it much easier to block that account from posting across many groups. After a period of time they could have their access restored, but they would have to be careful not to break any rules or violate community standards.

Structured Content

Finally, one way to really level up social media would be to provide tools for annotating posts with more information over time. Take the example of a video which shows a crazy neighbor terrorizing their community. We’ve all seen this kind of video. It would be helpful to be able to annotate the video as more details become available. Were the police contacted? Was there legal action? What were the consequences? Has a lawyer posted their analysis of the incident? Did the local news run a story on it?

Instead of just being left with a short video with very limited context, it would be better to learn more about what happened and the outcome. Right now we can like a post on TikTok of one of these videos and perhaps later see a Part 2, Part 3 and so on. By creating a structure which supports adding more to the story, we can all be better informed.
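A rough sketch of the data model might look like the following. The names (`Annotation`, `timeline`) are illustrative rather than from any real service, but the idea is that a post accumulates typed annotations over time so the full story stays attached to the original video.

```typescript
// Illustrative data model for attaching follow-up context to a post.

type AnnotationKind =
  | "update"         // "police were contacted", "charges were filed"
  | "legal-analysis" // a lawyer's breakdown of the incident
  | "news-coverage"; // a link to a local news story

interface Annotation {
  postId: string;
  authorId: string;   // who attached it, so credibility can be weighed
  kind: AnnotationKind;
  body: string;
  sourceUrl?: string; // a supporting link, when there is one
  createdAt: Date;
}

// The full story is the original post plus everything attached later,
// in the order it arrived.
function timeline(postId: string, annotations: Annotation[]): Annotation[] {
  return annotations
    .filter((a) => a.postId === postId)
    .sort((a, b) => a.createdAt.getTime() - b.createdAt.getTime());
}
```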

This type of content would work against misinformation. Instead of taking down content and suspending users, we could have the community contribute to a post with more than just comments. If an incident happened in Austin, TX, then let local accounts attach more details to the post. The police chief, mayor, or a local lawyer has far more credibility than some random user who simply writes, “I heard he was arrested” without even providing a link with more details.

Instead of doom scrolling and seeing a lot of terrible things that happened, it would be a much better experience to see that when something bad happens there was some positive outcome, or at least that there were consequences for the bad behavior. I’ve seen so many videos of people taken off a plane or arrested inside an airport, but I don’t know what happened next. I can try to Google it to get more details, but I’d rather crowdsource the details and make that post sharable to other social media services.

When it comes to purposely misleading posts, it would be possible to directly associate other posts which debunk them. We don’t need the company running the service to put a warning on a post if community members are able to debunk it themselves. Instead of blaming the billionaire CEO, it is on the community itself to set its own standards and enforce them. Some groups may have a higher tolerance for some topics while others do not. That is alright. Let people choose. And just like a physical town square, this online town square should not tolerate troublemakers who are only there to harm the community. The community should be empowered to protect itself.
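As a sketch, a debunk could be a first-class link between two posts rather than a label applied by the company. The names below (`DebunkLink`, `shouldDisplayRebuttal`) are hypothetical; the key design choice is that a group’s own moderators endorse the rebuttal before it is surfaced next to the original claim.

```typescript
// Sketch of a community debunk link: members attach a rebuttal post
// directly, and the group's moderators (not company staff) vouch for it.

interface DebunkLink {
  claimPostId: string;    // the post being disputed
  rebuttalPostId: string; // the post that debunks it
  endorsedBy: string[];   // moderator IDs who vouched for the rebuttal
}

// Show the rebuttal alongside the original once enough of the group's
// own moderators have endorsed it; the threshold is up to each group.
function shouldDisplayRebuttal(link: DebunkLink, minEndorsements = 2): boolean {
  return link.endorsedBy.length >= minEndorsements;
}
```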

Conclusion

This combination of features would make for a much better user experience than what we are getting from Facebook, Twitter, TikTok and other services today. Perhaps these features could even be worked into those services. People often talk about Construction TikTok or Fashion TikTok. Why not formally support groups that users can create, becoming admins and assigning moderators from the community? Facebook could let groups start making money by giving members exclusive access to content and features in the group. Twitter has started a Communities feature, which is sort of like groups, but it has not really taken off. The service is so dedicated to the timeline that it has not expanded into a great photos app or a service like Medium, which could leverage the current user base and create opportunities to be a part of the creator economy.

If new services are created, I hope these problems are considered and these features included.
