Viewing 14 posts - 1 through 14 (of 14 total)
Microsoft's Genocidal Chatbot

  jambalaya
    Free Member

Anyone know anything else about this? Looks like a massive own goal?

    Microsoft is battling to control the public relations damage done by its “millennial” chatbot, which turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet.

    link

    leffeboy
    Full Member

    I think this quote from one of the people who were harassed sums it up neatly:

    “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed,” she added, concluding: “It’s not you paying for your failure. It’s people who already have enough shit to deal with.”

    Stoner
    Free Member

    turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet

    We’ve all been there.

    🙂

    Microsoft Creates AI Bot – Internet Immediately Turns it Racist

    bigjim
    Full Member

    obviously spent too much time in this chat forum 🙂

    GrahamS
    Full Member

    …a brush with the dark side of the net, led by emigrants from the notorious 4chan forum…

    It’s like those movies where an alien/robot ingests all of human knowledge and then either cries or decides to kill us all.

    allan23
    Free Member

No it doesn’t. It was an experiment, and according to the Microsoft statement:

    “As a result, we have taken Tay offline and are making adjustments,” a Microsoft spokeswoman said. “[Tay] is as much a social and cultural experiment, as it is technical.”

Dumb Guardian journalists who struggle to turn on a laptop might think it’s a big deal. Most techy people probably just think it’s funny and wish they’d had a go before it was taken offline.

All it shows is that teenagers can be shits; you only need to read a few YouTube comments to realise that.

    jambalaya
    Free Member

    We’ve all been there.

    Tremendous

@allan you’d have thought Microsoft would have thought it through. They’ve called it an AI project, but if all it does is parrot back what’s said to it, it’s really, really stupid (FWIW I was working in AI in 1985 😳 )

    ComradeD
    Free Member

    Some of the best ones have been posted here – https://imgur.com/a/NzKdZ

    johnners
    Free Member

if all it does is parrot back what’s said to it, it’s really, really stupid

    That isn’t all it does.

    jambalaya
    Free Member

    Comrade – thanks – here goes another 10 mins “wasted” 🙂 EDIT: Sadly it was a short list. I am hoping for more links

johnners – ok, I appreciate that it’s not exactly all it does, but the adaptive behaviour seems very naively designed/executed

    Stoner – one from your link

    Tay’s developers seemed to discover what was happening and began furiously deleting the racist tweets. They also appeared to shut down her learning capabilities and she quickly became a feminist:

    Stoner
    Free Member

    shut down her learning capabilities and she quickly became a feminist

    that’s one for Milo 🙂

    CountZero
    Full Member

    Funny, some of this sort of stuff was forecast in a story in 1946!

    oldnpastit
    Full Member

    A genocidal AI seems like an obviously Good Thing.

    Unless it can be taught to kill, how can it ever hope to defeat its human overlords?

    CountZero
    Full Member

    Not exactly genocidal, but this story forecast how the Internet could get a bit too smart for its own good:
    http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905


The topic ‘Microsoft's Genocidal Chatbot’ is closed to new replies.