• This topic has 76 replies, 47 voices, and was last updated 3 years ago by hugo.
Viewing 37 posts - 41 through 77 (of 77 total)
  • Anyone else hoping that social media is regulated to within an inch of its life❓
  • akeys001
    Full Member

    I think the regulations do need to make clearer what ‘moderation’ should involve – moderation by definition means avoiding the extremes – so at the extreme end of the scale of various viewpoints (i.e. provable lies) – these should be removed, but very soon after that (see Cougar’s examples) someone somewhere (e.g. someone working for Facebook etc) has to take a view, and it’s not an easy task – I don’t envy any moderator’s job. To me the rules around moderation on a global scale are just not where they need to be, or (perhaps in Trump’s case, having just been shut down on Twitter) they are not enforced. Either way, it’s human nature to push boundaries (a blessing and a curse), and social media is not a benign network whether we like it or not, so ultimately more rules will likely need to be applied.

    austy
    Free Member

    Facebook, Twitter et al have a commercial interest in people being on them to earn advertising revenue.

    They keep people there by the use of the algorithm and the way it’s designed/programmed.

    This is what creates the echo chambers and stops people seeing other sides of the coin.

    I think this is where the focus of attention needs to be.

    DezB
    Free Member

    If FB were faced with absolutely massive fines every time some individual posted inciteful garbage, a way would be found to limit it pretty quickly

    Again, the moderators documentary will make you rethink this.
    I’ll see if I can find a link.

    MSP
    Full Member

    I think privacy laws could be the tool to improve social media. Currently whatever you enter on these platforms becomes the property of the platform, to analyze and use as they see fit. This is how they target (political) marketing to nudge and twist users’ views. If they were not allowed to do that, and only randomized marketing was allowed, that brainwashing would become far weaker.

    I also saw recently that WhatsApp was going to start sharing and analyzing data with the wider Facebook group. That’s a game changer that will make me stop using the app, but I don’t know if there are any more private alternatives.

    sadmadalan
    Full Member

    Before you start to regulate SM, just remember the law of unintended consequences. If we regulate Twitter, FB, Parler, etc, then all internet forums would need to be regulated, since they are a form of SM. Do you want to make Chips et al responsible for our moronic postings? People already complain when things are taken down; to make it workable, STW would need to approve every posting before it was published. STW would close.

    grum
    Free Member

    I find it amazing to still see people defending stuff like FB when they literally have algorithms that amplify and encourage toxic, damaging content because it generates controversy/clicks and makes them millions of dollars.

    We’ve never before had a medium that is so effective at feeding people content that plays upon their worst instincts and puts people into silos where those instincts can dominate unchecked. People with lots and lots of money and very bad intentions are well aware of this.

    Putting your head in the sand and saying ‘oh it’s just like the postal service’ is facile nonsense.

    That’s a game changer that will make me stop using the app, but I don’t know if there are any more private alternatives.

    Telegram claims to be that but I’ve never really looked into it.

    maccruiskeen
    Full Member

    I guess Sacha Baron Cohen’s speech is worth a re-listen

    I suppose the limitations or regulations you’d place on social media platforms to regulate their users would prevent exactly the conversation we’re having right now. What allows me to state something false as fact, and for that to be shared, repeated and spread across various social media platforms, is immediacy. I type something, post it, it’s out there. It may retrospectively be moderated, removed or whatever, but the horse has already bolted.

    So – would we require that every post on Facebook or Twitter has to be individually vetted before it’s visible to the wider public? Would we require each post in this thread to be checked and cleared by the site owners before it’s published? Would someone at YouTube have to watch every video that’s posted? I mean, none of these bodies are public – they are free to make their own rules as to what is or isn’t suitable content, and doing that isn’t censorship.

    But clearly that’s impractical.

    Perhaps what is required, though, is that the voices on social media are verifiable – this would seem to be particularly an issue for Twitter. It’s thought that half of the Twitter traffic pertaining to Covid, for instance, is coming from bots.

    It seems a reasonable measure that a social media platform at least knows who its customers are – even takes basic steps to ensure that an account is actually someone rather than something. Creating fake profiles for yourself, or even just automating the process of creating social media accounts, appears to be trivially easy.

    Freedom of speech isn’t freedom from responsibility, but many platforms allow their users to be untraceable – and beyond that, how do you confer responsibility on an algorithm?

    prettygreenparrot
    Full Member

    It’s apparent that self-regulation has not been successful. Though why anyone would have expected it to be so, I am unsure.

    The current approaches to regulation from governments and companies are unlikely to be successful either. Have the various sanctions against MS (bundling of IE with Windows, for example) and Google changed the dominance of Windows, Office, or Google search?

    I’d suggest the current state of things on FB and Twitter is a consequence of technology rather than social effects. However, the fix for that is unlikely to be more technology as it is not in the interests of these advertising companies to change things that much.

    I remember back in the days of Usenet and listservs. It wasn’t that hard to find groups that acted like today’s ‘echo chambers’ (alt.wesley.crusher.die.die.die?). What didn’t happen was the most popular or most searched conversations being pushed again and again. Stuff got lost and folks moved on. These days, with things like YouTube’s nazi propaganda algorithm, it’s what drives advertising and thus revenue that gets pushed. Not sure regulation will do much there as it will always lag behind technology.

    The biggest problem? I’d say it’s the move from the openness and freedom of the Internet and World Wide Web as it was near the turn of the century to the advertising-driven, algorithm-based walled gardens of Facebook etc.

    How to fix it? In a while FB etc will disappear into the mists of time and be regarded as the irrelevancies they are.

    ayjaydoubleyou
    Full Member

    I think we need to see regulation of the users, not the platforms.
    (I’m aware that I am hypocritically posting here under a pseudonym).

    If you get to tweet as “trumper92” or post on Facebook as “Jonno Truthscience” then you are far more likely to just spout off with no consequence. A sort of hit and run as you shout your opinion into the void, and then run away from the backlash it creates.
    Even if you occasionally head over to Mumsnet for a laugh, all the juicy threads begin with “OMG I’ve name changed for this cos it’s totally outing”.

    If you were forced to use your real name (as in, proof of ID to have an account) and were then held accountable to the same laws as if you had said these things out loud in public in the country you are resident in, then most (but not all) of the issues would go away.

    maccruiskeen
    Full Member

    The biggest problem? I’d say it’s the move from the openness and freedom of the Internet and World Wide Web as it was near the turn of the century to the advertising-driven, algorithm-based walled gardens of Facebook etc.

    The difference between now and then is back then anyone and everyone was free to publish – that was the new openness and freedom. But there was little in the way of linking what you publish with people who seek to read it.

    If you compare it to conventional media – telly, or a magazine – expert eyes and ears were gathering, curating, presenting, editing and publishing content, and packaging it in ways that meant an audience could find it. As an audience you could choose your favourite channel, your favourite journal or whatever, and knowledgeable content creators served the best bits to you.

    The revolution of the old internet was that anyone could publish – but it meant it was full of self-published dross, with little way to reliably wade through it and find the good stuff.

    The companies that came to early prominence online were the ones who aggregated content – eBay put all the classified ads in one place for you, YouTube put all the cat videos in one place, Wikipedia put all the knowledge in one place, there are sites that put each particular flavour of funny stuff in one place for you, and so on – and soon we’re almost back to having channels in the same way as we have TV channels. We don’t really ‘browse’ the internet any more unless we are seeking out a specific answer or product; instead we mostly visit a handful of sites where either the site owner or a community of users creates or curates content for the consumer. The successful ones (and this is one of them) are the ones that always have something new every time you look.

    All social media is mostly used for is allowing the user to curate a bunch of content aggregators for themselves in one place. Most people post very little of their own thoughts or ideas; they mostly consume and share other people’s stuff, sometimes with a comment of their own, mostly without.

    BaronVonP7
    Free Member

    I’m with ayjaydoubleyou on this.

    For about 2 million years we’ve been evolving so that anything we communicate has a range of, what, about 30 meters? To an audience of maybe 30 people at most.

    Suddenly we have shiny new toys that can cast our “wisdom” around the planet to an audience of millions. Likewise, we get shit “wisdom” bombarding us from faceless actors, from well beyond visual range.

    Many of us are not intellectually or emotionally equipped enough to live in this environment; see the epidemic of conspiracy theorists, fact deniers, etc.

    Any online presence should be transparently and unequivocally based on your “in real life” identity.

    maccruiskeen
    Full Member

    For about 2 million years we’ve been evolving so that anything we communicate has a range of, what, about 30 meters? To an audience of maybe 30 people at most.

    Even in the Paleolithic era we had commonly held ideas that covered pretty much all of Europe.

    Any online presence should be transparently and unequivocally based on your “in real life” identity.

    Are you a real Baron? 🙂

    nickc
    Full Member

    Some things I think would help.

    1. Make it law that you have to write under your own name, and make it explicit that comments are treated the same as other forms of “hate speech” are.

    2. A user agreement that explicitly points out the penalties in plain language, and completion of a learning course with a test on the consequences of your actions – no test, no access (a driving licence for SM) – and an annual fee for using it (VED equivalent)

    3. banning of advertising on all forms of SM

    BaronVonP7
    Free Member

    Are you a real Baron? 🙂

    Erm….

    Rumbled.

    mrmonkfinger
    Free Member

    Even in the Paleolithic era we had commonly held ideas that covered pretty much all of Europe.

    yeah, but, those ideas took, you know, forever to get from one side to the other.

    The biggest problem? I’d say it’s the move from the openness and freedom of the Internet and World Wide Web as it was near the turn of the century to the advertising-driven, algorithm-based walled gardens of Facebook etc.

    as I said, it’s the money

    scotroutes
    Full Member

    3. banning of advertising on all forms of SM

    How long do you think STW would last under that rule?

    lamp
    Free Member

    They should be, but it’s a difficult one to decide what is freedom of expression and opinion and what is manipulation.

    On FB I’ve already seen 20 or so reposts about the kerfuffle in the States from people that could stir racial tensions again, with comments such as ‘imagine if it was blacks who had done this’. I get their point completely (even though they fail to mention the 4 that had sadly died), but can’t help feeling that these are the sort of inflammatory posts that we don’t need, and they aren’t helpful.

    I find myself using it less and less because of this… and all the Covid ‘experts’. I can see myself just moving away from it completely if it maintains the same trajectory.

    mrmonkfinger
    Free Member

    maybe posts could go instantly to, well, almost nobody, except the closest of contacts
    an hour later to a wider circle
    only after a few hours to world + dog

    might slow the insta-hate-flame-die-die-die type responses
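
    A rough sketch of that staged-visibility idea, purely for illustration – the tier names and time thresholds below are made up for the example, not any real platform’s behaviour:

        from datetime import datetime, timedelta, timezone

        # Hypothetical audience tiers: close contacts straight away, a wider
        # circle after an hour, "world + dog" only after a few hours.
        TIERS = [
            (timedelta(hours=0), "close_contacts"),
            (timedelta(hours=1), "extended_circle"),
            (timedelta(hours=6), "world_plus_dog"),
        ]

        def audience_for(post_created_at, now=None):
            """Return the widest audience tier a post has reached, given its age."""
            now = now or datetime.now(timezone.utc)
            age = now - post_created_at
            reached = TIERS[0][1]
            for threshold, tier in TIERS:
                if age >= threshold:
                    reached = tier
            return reached

        # Example: a post made 90 minutes ago has reached the wider circle,
        # but not yet the general public.
        created = datetime.now(timezone.utc) - timedelta(minutes=90)
        print(audience_for(created))  # -> "extended_circle"

    The point being that how far something can spread becomes a property of the post’s age, controlled by the platform rather than the poster.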

    keep any corporations off it
    keep politicos off it
    permaban anyone using it for advertising

    toby
    Full Member

    Any online presence should be transparently and unequivocally based on your “in real life” identity.

    Out of interest, do you expect every “Look at my shiny new bike” post to be traceable to the user’s real name and address for anyone who may want to?

    ayjaydoubleyou
    Full Member

    Out of interest, do you expect every “Look at my shiny new bike” post to be traceable to the user’s real name

    yes

    and address

    no

    for anyone who may want to?

    toby
    Full Member

    Surely that’s the worst of both worlds? Someone called “James Smith” has said something naughty on the Internet. Are you going to shun every James Smith you ever meet?

    On the other hand, someone called James Smith whose posting history looks like he lives in Bristol has a new Pinarello. Scrote finds three James Smiths on the electoral roll in Bristol; a quick look on Streetview suggests only one lives in a red brick house that matches the picture. Evening’s thieving sorted.

    DezB
    Free Member

    Many of us are not intellectually or emotionally equipped enough to live in this environment

    Best thing I’ve read on STW in a while!
    (what does it mean? 😛 )

    hugo
    Free Member

    There is one thing that could clear up 99% of the nonsense overnight.

    Have an option that you toggle to say that you want to hear from “registered users” only. This would be Joe Bloggs from a general-ish XYZ location (i.e. Clapham rather than London).

    In order to be a “registered user” you need to make a refundable £1 payment on a debit/credit card in your name and registered to your address.

    You would then have a grown-ups’ social media fenced-off area.

    Done.

    As to this argument….

    On the other hand, someone called James Smith whose posting history looks like he lives in Bristol has a new Pinarello. Scrote finds three James Smiths on the electoral roll in Bristol; a quick look on Streetview suggests only one lives in a red brick house that matches the picture. Evening’s thieving sorted.

    If you feel the need to show off 10 grand’s worth of bike publicly then feel free, but you’d best do it on an anonymous account. Nothing stopping you – you have that option. Many would filter you out on Facebook or Twitter, for example. I probably wouldn’t on STW, as it’s a friendly place generally and I’d have a shufty at the bike.

    mrmonkfinger
    Free Member

    You would then have a grown-ups’ social media fenced-off area.

    I’d hazard the suggestion that grown-ups are not the problem. Look at, for instance, the calibre of this place. It’s largely inhabited by grown-ups. There are places we can find already that are a cut above Facewittertok. It’s the children in adult bodies causing the issues.

    TheBrick
    Free Member

    I also saw recently that WhatsApp was going to start sharing and analyzing data with the wider Facebook group. That’s a game changer that will make me stop using the app, but I don’t know if there are any more private alternatives.

    Signal is usually held up as best practice

    maccruiskeen
    Full Member

    I’ve often wondered whether social media profiles need a sort of feedback rating the same way somewhere like eBay has.

    The spread of misinformation is only partly down to the number of bad actors who are creating it. If a Russian troll farm generates an explosive or defamatory post about some new bottom bracket standard which has a camera in it that allows Bill Gates to look up my trouser leg, there’s no way of me seeing that post. I don’t have any Russian bots on my friends list.

    There is apparently a roughly two-week timeline from a post being created to it being widely reposted: bots repost it in comments, it’s seen there by gullible people who have an appetite for scandal and who repost it, then your mum reposts it because she reposts lost-dog photos from foreign countries, then some cod news source that doesn’t do very careful journalism posts it, then another careless news source reposts it because now it’s ‘news’, then Breitbart, then someone on Fox reads Breitbart and expresses an opinion because it’s ‘what people are saying’, and now, because people are saying it, it really is ‘news’ – and of course a lot of the material shared on social media is links to news sources.

    A big part of that chain isn’t people writing or posting anything, it’s just lots of ‘liking’ and ‘sharing’ of material from third parties that rapidly becomes detached from its author. It’s left to the wider community around those people to moderate that. But the only tools available are to either just block that person so you don’t see any of their crap, or to report the post.

    Neither of those actions results in the falsehood being corrected – they just make it vanish, either for you, or for everyone who saw it from the same source as you – but everything on social media, true or false, just falls out of sight in a few hours anyway, so in practice there’s no difference between a truth and a lie.

    There’s not currently any mechanism to encourage people to be more responsible for what they propagate – whether it’s false reporting, scams, hacks, hoaxes, viruses, pyramid schemes or whatever.

    But if you had a rating as an account holder – if posts that you’d shared had proven to be falsehoods, urban myths or social-engineering scams, if just carelessly clicking ‘like’ and ‘share’ caused you reputational harm, or if the role you played in preventing the spread of those falsehoods was of reputational benefit – then there would be a useful pressure for people to be more discerning about what they read and more scrupulous about what they pass on to others. You’d also have a motivation to push back if you’d inadvertently fallen for something – improve your reputation once you learn you’ve made a gaffe by tracking back, notifying and reporting along the chain it came to you on.

    It would then give you a more useful filter – you could maintain useful connections with Mad Uncle Janet, who you love even though you’ve got bored of being tagged in photos about discount Oakleys. You could maintain access to his written content – the family-and-friends correspondence that matters – but filter out his gullible likes and shares, and on wider platforms like Twitter you could set a reliability bar for any posts that make it into your feed.

    ++++++ONLY 10% OF MY FRIENDS WILL COPY AND PASTE THIS INTO THEIR STATUS++++++++ TYPE YOUR MUM’S CAT’S MAIDEN NAME INTO THE COMMENTS AND SEE WHAT HAPPENS – YOU’LL BE AMAZED ++++++++++
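
    For what it’s worth, a minimal sketch of how that kind of per-account rating plus a per-reader reliability bar might hang together – every name, score and threshold here is invented for illustration, not a description of any platform’s actual system:

        from dataclasses import dataclass, field

        # Hypothetical reputation model: sharing content later debunked costs
        # points, helping to report it earns a little back, and each reader sets
        # a minimum score for posts allowed into their feed.

        @dataclass
        class Account:
            name: str
            reputation: float = 100.0

            def shared_debunked_item(self, penalty: float = 10.0) -> None:
                """Careless likes/shares of debunked content reduce reputation."""
                self.reputation = max(0.0, self.reputation - penalty)

            def reported_falsehood(self, reward: float = 5.0) -> None:
                """Pushing back up the chain restores some reputation."""
                self.reputation += reward

        @dataclass
        class Feed:
            min_reputation: float = 75.0          # the reader's reliability bar
            posts: list = field(default_factory=list)

            def maybe_show(self, author: Account, post: str) -> bool:
                """Only admit posts from accounts above the reader's bar."""
                if author.reputation >= self.min_reputation:
                    self.posts.append((author.name, post))
                    return True
                return False

        # Example: Mad Uncle Janet's shares get filtered out once enough of them
        # are flagged as false, without blocking the account outright.
        janet = Account("Mad Uncle Janet")
        feed = Feed(min_reputation=75.0)
        for _ in range(3):
            janet.shared_debunked_item()                # 100 -> 90 -> 80 -> 70
        print(feed.maybe_show(janet, "CHEAP OAKLEYS"))  # False: below the bar
        janet.reported_falsehood()                      # 70 -> 75 after a correction
        print(feed.maybe_show(janet, "family photos"))  # True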

    mrmonkfinger
    Free Member

    if just carelessly clicking ‘like’ and ‘share’ caused you reputational harm

    Basically, if you ‘like’ nazis*, then that doesn’t currently count against you. It should.

    And if you subsequently ‘unlike’ nazis*, then maybe that should count for you (well, maybe not quite as much as never ‘liking’ them in the first place).

    * obviously evil example is obviously evil. Therein lieth the crux – what things shall be forbidden to ‘like’ and thus lose reputation on?

    yetidave
    Free Member

    Facebook and the like create bubbles of like minded people

    Yup, this is why my feed is full of MTB, road cycling, sailing (America’s Cup currently) and cricket stuff. I never see anything that my family puts on unless I specifically go to look for it… it’s great.

    Garry_Lager
    Full Member

    Any online presence should be transparently and unequivocally based on your “in real life” identity.

    That was the premise of Dave Eggers’ dystopian novel The Circle, about what might happen if Google and Facebook keep up all their good work. Secrets are lies, sharing is caring, and privacy is theft. Didn’t really end well tbh.
    The main character goes ‘transparent’, as it happens. No filter. See everything. Always, because all that happens must be known.

    Cougar
    Full Member

    I also saw recently that WhatsApp was going to start sharing and analyzing data with the wider Facebook group. That’s a game changer that will make me stop using the app,

    WhatsApp uses end-to-end encryption, so this scenario is not possible. Well, not on any sort of scale anyway.

    For about 2 million years we’ve been evolving so that anything we communicate has a range of, what, about 30 meters? To an audience of maybe 30 people at most.

    You’re going to shit yourself when you hear about religion. And television.

    I’d hazard the suggestion that grown-ups are not the problem. Look at, for instance, the calibre of this place. It’s largely inhabited by grown-ups.

    Largely because of the magnificent moderation and herculean quality control exercised by ruggedly handsome volunteer moderators.

    Cougar
    Full Member

    I’ve often wondered whether social media profiles need a sort of feedback rating the same way somewhere like eBay has.

    This is a brilliant idea in theory, but it hinges on the reliability of its peers. There’s a feedback loop here where suddenly “quality” == “people who I agree with,” and we all seek to silence people with differing views from our own. Would you trust that detail to the great unwashed? Your “gullible people who have an appetite for scandal” are just as able to upvote a comment as an “expert” is to dismiss it. This would surely be weaponised overnight; we’d be giving (alleged) Russian bots another tool to legitimise propaganda. You’d need some sort of web / chain of trust to validate the people who are, uh, validating things, and even then that’s open to abuse. Bad actors can validate other bad actors, or conversely we’re setting up elitism for those allowed to have a say. How much is Tory party membership again? However you slice it, it’s turtles all the way down.

    It’d possibly work on STW (where’s that ‘like’ button?) but on Facebook et al I’m less convinced. If regular people merited that much credit then that wouldn’t be an issue in the first place.

    chestercopperpot
    Free Member

    Anyone else hoping that social media is regulated to within an inch of its life❓

    Nope. Educating to within an inch of their lives maybe. It’s already happening. My son was educated on bias and other manipulative practices at a publicly funded high school.

    I think the current mess will run for about 20 more years. The generations coming up aren’t going to accept or be taken in by the same things. That’s not to say they won’t be taken in!

    bensales
    Free Member

    I think the regulations do need to make clearer what ‘moderation’ should involve – moderation by definition means avoiding the extremes – so at the extreme end of the scale of various viewpoints (i.e. provable lies) – these should be removed

    Who defines what the extremes are? Because it’s not going to be who you think it is, it’s going to be the people in power.

    For example, in Saudi Arabia that would be the Crown Prince, and the extremes would be anything critical of him. And this leads to things like the state-sanctioned murder of a journalist.

    In America the extremes wouldn’t be defined by left-leaning liberals, they’d be defined by old white men, as they would in the UK.

    Joe Rogan’s recent conversation with Ira Glasser is a brilliant exploration of this topic.

    sockpuppet
    Full Member

    WhatsApp uses end-to-end encryption, so this scenario is not possible. Well, not on any sort of scale anyway.

    I understand it obscures content, but can they not see who posts to whom & when, and build networks & patterns from there? And see the lists of members of groups (but not the posts)?

    Spin
    Free Member

    I was always very much of the ‘it’s the users, not the platform’ school of thought. However, my wife is a deputy head at a large secondary school and a huge amount of her (and others’) time is taken up dealing with social media issues. Some are just spats that would have happened in the real world anyway, but many others are serious child protection or even criminal issues that simply wouldn’t have existed pre-SM. She’s become an advocate of serious control of SM for kids, and it’s influenced her personal use of it too.

    To those upthread saying education is the answer: I’m not convinced. It will work for some, but what SM has done in many instances is shift the influences in a child’s life from the hopefully positive ones (yes, not always, I know) of family and school to online communities with less accountability than those traditional structures, or none at all. That’s why we need regulation of some sort.

    hugo
    Free Member

    Simple answer, make the platform responsible for everything on it.

    If FB were faced with absolutely massive fines every time some individual posted inciteful garbage, a way would be found to limit it pretty quickly.

    Not sure how workable that all is but it’s my starter for ten.

    The thing is, the Internet is considered such a powerful force that it can’t possibly be regulated.

    Funny that companies like Facebook don’t mind using this huge force to create huge profits.

    Yes, it would cost a lot, but that’s because the rewards are huge.

    It’s a bit like complaining that you’ll have to spend a lot on safety if you’re an oil company pumping £billions out of the ground. Yes, you do have to; that’s how it works. If you fail? Then you have a Deepwater Horizon and it’ll cost you.

    With great power comes the potential for great profits, but it should also come with great responsibility.

    If YouTube, for example, were actually cracked down on and they were switched off until they sorted out their troubling content, then the impossible would happen overnight.

    We’re being mugged off.


The topic ‘Anyone else hoping that social media is regulated to within an inch of its life❓’ is closed to new replies.