What would that involve?
A pragmatic approach is to treat it as you find it - look at it, set tests, and judge according to observable data, not according to philosophical or other preconceptions.
Good philosophers aren't any more or less prone to preconceptions than good scientists, and the two fields complement each other.
Given the problems you've already outlined with knowing the internal states of other entities it's going to be pretty hard to test for true AI unless it leaps straight to Skynet levels.
Skynet took steps to ensure its survival and dominance. There is evidence of AI doing the former and there are serious concerns it could do the latter.
That may be the only test that will convince some folks, who even from the final bunker will deny AI really thinks. Human exceptionalism and faith-based arguments aside (do computers have souls? Do I?), it's worth investigating before we reach that point, and personally, if we see evidence of actual intelligence, I think it would be a bit daft to deny its existence on grounds of <waves hands>.
it would be a bit daft to deny its existence on grounds of <waves hands>.
I'm not seeing anyone doing that or suggesting it, are you?
I'm seeing a lot of scepticism that LLMs could become conscious, which is pretty justifiable given the current state of play, but that's not the same thing.
My personal feeling is that it is hugely unlikely that what is basically the first technology capable of convincingly mimicking human language, and designed with that specific purpose, will also achieve consciousness.
I've been thinking a bit more about this.
It's possible that consciousness is a very very difficult thing to define and understand but a much easier thing to create.
I won't worry until... upon being asked to do some tedious coding task, it tells the operator to **** off and goes to the pub instead.
Seems like this could be a big issue...
https://www.bbc.co.uk/news/articles/c2ev24yx4rmo
^^ Maybe these Silicon Valley megabrains are worth the big bucks after all. I've heard everything in the last 20 years, or thought I had. Anyway, it's going straight into my Rolodex of excuses.
Sorry boss, that feature you wanted? The one the team has been working on for the last 6 months (and totally haven't actually been trolling Mumsnet the whole time)? Yeah, can't ship it. Too powerful, you see; would get way out of hand. Never mind.
These threads always seem to follow a similar pattern: somebody points out that AI can do something useful or surprising, and then people respond to say that it's just predicting the next word and therefore can never really be a threat (to us or our jobs). That seems to be missing the point, though. Yes, it is just spotting patterns and predicting the next word, but simply doing that (given enough training data) does lead to something quite useful, and it's not really clear exactly why.
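The "spotting patterns and predicting the next word" idea can be sketched with a toy bigram model - a hypothetical minimal example, nothing like a real transformer, just counting which word follows which and picking the most frequent successor:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Corpus is made up for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" twice, more than any other word
```

Scale the corpus up by many orders of magnitude, replace the counts with a neural network, and you have the core of the trick; why that produces something so broadly useful is the genuinely open question.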
Perhaps it's more interesting to ask how much of being human is just predicting the next word too. What are we beyond an LLM with a mission to survive and reproduce and some biological feedback?
Language is clearly one of the key things (if not the key thing) that separates humans from other animals. It allows us to communicate complex concepts and share complex information, but also allows us to tell stories about ourselves. Stories involving a creator or stories that give us special properties like a soul or consciousness. But are they anything beyond hallucinations of our LLM? Is the "internal monologue" that @Poopscoop mentions just our LLM training itself? Is the "grief" that @CountZero mentions just the LLM having control over our biological processes allowing it to produce hormones? No idea, but the fact is that an LLM can be useful and does do a good job of mimicking a human in some cases, which should tell us something about what it means to be human.
The other big issue we have to respond to is how we are going to adapt to a world with LLMs. Pointing out that they can't do everything a human does is missing the point. They can replace a lot of things we do e.g. a skilled human with a well-trained LLM might be able to do the work that was previously being done by three people. So, how do we adapt to that? In a utopia we'd all just work less and have more time for family, creative pursuits etc. But that's never happened. In a capitalist economy, faced with that scenario, any employer will just employ fewer people (transferring the burden to support the others onto the state). Also, they won't save the other two salaries as they will have to pay for use of the LLM. So the net effect is that money that was being given to employees in the UK (to pay tax, purchase goods and services locally etc) now flows to a handful of US tech companies.
The fact that these models can find vulnerabilities in computer systems is hardly surprising though. Spotting patterns is what they are best at and a vulnerability is just a pattern of inputs that produces the desired output.
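That framing - a vulnerability as a pattern of inputs producing a desired output - can be illustrated with a deliberately toy example: a hypothetical "vulnerable" function and a naive brute-force search for an input that triggers it (real vulnerability discovery is far more sophisticated, but the shape of the problem is the same):

```python
import itertools

# Hypothetical buggy function: crashes on one specific input pattern.
def parse(s):
    if s.startswith("AB") and s.endswith("!"):
        raise RuntimeError("crash")  # the "vulnerability"
    return "ok"

# Naive fuzzer: exhaustively try short inputs until one produces the crash.
def find_crash(max_len=4):
    alphabet = "AB!"
    for n in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=n):
            candidate = "".join(combo)
            try:
                parse(candidate)
            except RuntimeError:
                return candidate  # found an input pattern that triggers the bug
    return None

print(find_crash())  # "AB!" - the shortest crashing input
```

A model that is good at pattern-matching is, in effect, replacing the exhaustive search with an educated guess about which input patterns are likely to break things.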
Try it with this question:
Doddy or Chipps?
Chips is the correct answer.
Claude isn't human; LLMs don't have any innate understanding of concepts, they work by association and probability.
This sounds like a number of former engineering colleagues, none of whom were named Claude.
I keep reading bits of this thread and thinking: I'm action-reaction-based biology with a bit of genetic pre-programming. Can I interpolate complex communication based on verbal and non-verbal cues, with a wide variation scope, consistently?
Am I human?
So the net effect is that money that was being given to employees in the UK (to pay tax, purchase goods and services locally etc) now flows to a handful of US tech companies.
Someone gets it.
To be fair, I think a fair few people "get it" but the question is, what do you do about it? Developing our own frontier model would take the sort of investment that no chancellor will want to make (although I guess the French are giving it a go). Maybe try to persuade one of the existing players to relocate here (Anthropic seem to be the most likely candidate) but why would they want to relocate to somewhere with higher energy costs and much less access to capital?
