Social media itself is a grand experiment. What happens if you start connecting people from disparate communities, and then prioritize outrage and emotionalism? In years prior, you would be heavily shaped by the people you lived near. TV and the internet broke this down somewhat, but social media really blew the doors off. Now almost no one seems able to explain all the woes we're facing today: extreme ideas, populism, the destruction of institutions.
All of this because people are addicted to novelty and outrage, and because companies need their stock price to go up.
From history we know that research left unchecked and unrestricted can lead to some really dark and horrible things. Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independent ethics committees / reviews. They're free to experiment as they like on the entire population.
I never understood why this doesn't alarm more people on a deep level.
Heck you wouldn't get ethics approval for animal studies on half of what we know social media companies do, and for good reason. Why do we allow this?
What counts as research? If I make a UI change, I guess it's OK to roll it out to everyone, because that's not an experiment, but if I roll it out to 1%, then that's research? If I own two stores and decide to redecorate one and see if sales increase vs the other store, do I need government approval?
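For what it's worth, the mechanics are the same either way: a staged rollout and an A/B experiment usually share one code path, and the only difference is whether anyone compares outcomes afterwards. A minimal sketch in Python (hypothetical names, not any company's actual system):

    import hashlib

    def in_rollout(user_id: str, feature: str, percent: float) -> bool:
        # Hash user+feature into one of 10,000 stable buckets (0.01% each),
        # so the same user always gets the same answer for the same feature.
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 10_000
        return bucket < percent * 100  # percent=1.0 -> buckets 0..99, i.e. 1% of users

Whether that 1% is an "experiment" or just a "gradual rollout" comes down to whether someone is watching the metrics, which is exactly why the line is hard to regulate.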
Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.
> Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals.
One possible example is the emotion manipulation study Facebook did over a decade ago[0]. I don't know how you would perform an experiment like this on animals, but Facebook has demonstrated a desire to understand all the different ways its platform can be used to alter user behavior and emotions.
0: https://www.npr.org/sections/alltechconsidered/2014/06/30/32...
Isn't this just what every media company has done since the beginning of time? You think the news companies don't select their stories based on the same concept? And I'm pretty sure you would get approval to do something similar to animals given that you can get approval to actually feed them drugs and see how that affects their behavior.
Can you provide evidence that media companies have performed research specifically to see if they can make people sadder, similar to what was described above?
Turn on cable news for a minute and it's quite obvious that it is designed to make you angry. What difference does it make if they performed research or not?
Are you being serious right now or just engaging in "asking questions" to suppress others' thoughts? Why are these types of comments so common on this site? No, obviously we aren't in fact talking about basic code changes, but if changes that clearly show users getting more depressed or alienated are being made consistently, that should be questioned more and finally regulated.
Fun fact: the last data privacy law the US passed was about video stores not sharing your rentals. Maybe it's time we start passing more; after all, it's not like these companies HAVE to conduct business this way.
It's all completely arbitrary; there's no reason why social media companies can't be legally compelled to divest from all user PII and forced to go to regulated third-party companies for such information. Or force social media companies to allow export of data, or to follow consistent standards so competitors can easily enter the market and users can easily follow too.
You can go for the throat and say that social media companies can't own an advertising platform either.
Before you go all "oh no, the government should help the business magnates more, not the users," I suggest you study how monopolies operated in the 19th century, because they looked no different from the corporate structure of any big tech company, and see how the government finally regulated those bloodsuckers back then.
> Are you being serious right now or just engaging in "asking questions" to suppress others thoughts?
I must be really good at asking questions if they have that kind of power. So here's another: how would we ever even know those changes were making users more depressed if the company didn't do research on them? Which they would never do if you make it a bureaucratic pain in the ass.
And, no, I would much rather have the companies that I explicitly create an account with and interact with be the ones holding my data, rather than some shady third parties.
I don’t think it is fair to criticize the person you are responding to for asking the question they did.
These types of comments are common on this site because we are actually interested in how things work in practice. We don’t like to stop at just saying “companies shouldn’t be allowed to do problematic research without approval”, we like to think about how you could ever make that idea a reality.
If we are serious about stopping problematic corporate research, we have to ask these questions. To regulate something, you have to be able to define it. What sort of research are we trying to regulate? The person you replied to gave a few examples of things that are clearly ‘research’ and probably aren’t things we would want to prevent, so if we are serious about regulating this we would need a definition that includes the bad stuff but doesn’t include the stuff we don’t want to regulate.
If we don’t ask these questions, we can never move past hand wringing.
>Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independent ethics committees / reviews.
These bodies are exactly what makes academia so insufferable. They're just too filled with overly neurotic people who investigate research way past the point of diminishing returns, because they are incentivized to do so. If I were to go down the research route, there is no way I wouldn't want to do it in the private sector.
Abstract: "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research. "
I’m half expecting headlines thirty years from now to talk about social media the way we now talk about leaded gasoline, a slow, population-wide exposure that messed with people’s minds and quietly dragged down cognition, wellbeing, and even the economy across whole generations.
Same as it ever was. You see the same kind of thing in the food industry, pharmaceutical industry, tobacco industry, fossil fuel industry, etc. On the one hand it's almost inevitable: who (outside of the government) is going to care enough about the results of stuff like this to fund it, if not the industry affected? You also often need the industry's help if you're doing anything that involves large sample sizes or some kind of mass production.
On the other hand it puts a big fat question mark over any policy-affecting findings since there's an incentive not to piss off the donors/helpers.
The people in these industries are collectively responsible for millions of preventable deaths, and they, their families, and generations of their offspring are and will be living the best lives money can buy.
And yet one person kills a CEO, and they're a terrorist.
Large and complex systems are fundamentally unpredictable and have tradeoffs and consequences that can't be foreseen by anybody. Error rates are never zero. So basically anything large enough is going to kill people in one way or another. There are intelligent ways to deal with this, and then there is shooting the CEO, which will change nothing because the next CEO faces the exact same set of choices and incentives as the last one.
Well, given what you said, one obvious mechanism is to cap the sizes of these organizations so that any errors are less impactful. Break up every single company into little pieces.
That doesn't really help, because the complexity isn't just internal to the companies; it also exists in the network between the entities that make up the industry. It may well even make things worse, because it is much harder to coordinate. E.g., if I run into a bug caused by another team at work, it's massively easier to get that fixed than if the bug is in vendor software.
In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.
We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, but when 100% of users face an outage once a year everyone hears about it even though it is more reliable than the former scenario.
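The back-of-the-envelope arithmetic bears that out. Assuming, for simplicity, that each outage lasts a full day (my assumption, purely illustrative):

    # Expected outage-days per user per year, assuming day-long outages.
    decentralized = 0.01 * 365  # 1% of users down on any given day -> ~3.65 days/user/year
    centralized = 1.00 * 1      # every user down once a year       -> 1.00 day/user/year
    print(decentralized / centralized)  # ~3.65x more downtime in the "quiet" scenario

The loud, centralized failure mode still works out to roughly a quarter of the per-user downtime of the quiet, distributed one.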
But do they need it? How do you know? And don't say because the doctor said so, because doctors disagree all the time. When my grandfather was dying in his late 80s, the doctor said there was nothing he could do. So his children took him to another doctor, who said the same. And then another doctor, who agreed with the first two. But then they took him to a 4th doctor, who agreed to do open heart surgery, which didn't work, and if anything hastened his inevitable death due to the massive stress. The surgery cost something like 70 grand and they eventually got the insurance company to pay for it. But the insurance company should not have paid for it because it was a completely unnecessary waste of money. And of course there will be mistakes in the other direction because this just isn't an exact science.
You say “a CEO” like it’s just a fungible human unit. In reality, a CEO is much much more valuable than a median human. Think of how many shareholders are impacted, many little old grey haired grannies, dependent on their investments for food, shelter and medical expenses. When you think of the fuller context, surely you see how sociopathic it is to shrug at the killing of a CEO, let alone a CEO of a major corporation. Or maybe sociopathy is the norm these days, for the heavily online guys.
Whole industries have been paid off for decades; the hope is independent journalists with no ties to anybody but the public they want to reach.
Find one independent journalist on YT with lots of information and sources for it, and you will notice how we have been living in a lie.
I mean, I no longer know who to trust. It feels like the only solution is to go live in a forest and disconnect from everything.
Also feel you wrt living in a forest and leaving this all behind.
Because that's where people with that expertise work.
When one gets fired, quits, retires, or dies, you get a new one. Pretty fungible, honestly.
But yeah, shooting people is bad.