The slightly creepy advances of AI

Irwell

Well-Known Member
Joined
12 Sep 2008
Messages
2,565
Location
Lower Broughton
Whilst using my Google phone, I dismissed a notification to rate the shop I was standing in, and at that moment I got a notification from the Google Assistant about a Google News article I might like. The article was about the new version of Google Assistant, and it's pretty impressive...

 
That's quite nifty... you know the first thing everyone is going to do is spend ages trying to trip it up rather than actually using it though.
 
Some of the enhancements are so impressive. Little things like inflection and utterances like 'er' make it sound so much more natural, and bigger things like negotiating times and dealing with idiots who think '7pm' means seven people are really impressive. Apparently, if the AI can't reach a definitive result it will pass the call on to a human at Google to complete, and their responses are then used to improve the AI further.
 
Some of the enhancements are so impressive. Little things like inflection and utterances like 'er' make it sound so much more natural, and bigger things like negotiating times and dealing with idiots who think '7pm' means seven people are really impressive. Apparently, if the AI can't reach a definitive result it will pass the call on to a human at Google to complete, and their responses are then used to improve the AI further.
Last I heard Google hadn't gotten this to a stage where they were willing to do a live demo, so I'd be unsure how far along it actually is.

I'd also consider it a UI and not an AI as it's working from preprogrammed responses.
 
Last I heard Google hadn't gotten this to a stage where they were willing to do a live demo, so I'd be unsure how far along it actually is.
It's apparently coming very soon. It has already run thousands of simulated conversations.

I'd also consider it a UI and not an AI as it's working from preprogrammed responses.
It's still AI if a neural net is making decisions around appropriate responses. Apparently it even takes the tone of the replies into account.
 
It's apparently coming very soon. It has already run thousands of simulated conversations.


It's still AI if a neural net is making decisions around appropriate responses. Apparently it even takes the tone of the replies into account.
It's only an AI if it's self-aware. A UI is perfectly capable of selecting an appropriate option from a list of responses using preset parameters; what it's not capable of is creating its own responses when faced with a situation it has no experience of, hence it defaulting back to a human when necessary.

It's coming very soon to a small group of beta testers and there's a huge difference between simulated conversations and a live demo in uncontrolled conditions.
 
It's only an AI if it's self-aware. A UI is perfectly capable of selecting an appropriate option from a list of responses using preset parameters; what it's not capable of is creating its own responses when faced with a situation it has no experience of, hence it defaulting back to a human when necessary.
Your definition of AI is clearly different to the rest of the industry's, then. You are one of the few who thinks AI requires self-awareness. AI as generally referred to is applied machine learning and little more.

It's coming very soon to a small group of beta testers and there's a huge difference between simulated conversations and a live demo in uncontrolled conditions.
Does that make it any less impressive? Their AI held a discussion with someone and managed to understand that it didn't need to make a table reservation, despite the person misunderstanding what it was saying a couple of times and having a confusing accent.
 
Your definition of AI is clearly different to the rest of the industry's, then. You are one of the few who thinks AI requires self-awareness. AI as generally referred to is applied machine learning and little more.
Machine learning is one small subset of AI research. Speech recognition is far from the be-all and end-all of human-like behaviour and intelligence.

Does that make it any less impressive? Their AI held a discussion with someone and managed to understand that it didn't need to make a table reservation, despite the person misunderstanding what it was saying a couple of times and having a confusing accent.
It's a slight improvement on the likes of Siri, Alexa and Assistant 1 then.
As I said, why haven't they been confident enough to do a live demo?
 
Machine learning is one small subset of AI research. Speech recognition is far from the be-all and end-all of human-like behaviour and intelligence.
Almost all of which are varying levels of applied machine learning. You accept that the industry considers these specific applications to be AI, then? I mean, Google even refers to it as AI. My own work involves shallow neural nets and decision trees, amongst other algorithms, and I would refer to those elements of what I work on as AI.
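For what it's worth, the "decision tree" kind of AI being talked about here is nothing mysterious: it's just learned branching logic over input features. A toy hand-rolled sketch in Python (nothing to do with Google's actual system; the features and intents are made up purely for illustration):

```python
# Toy decision tree for routing a caller's utterance to an intent.
# A real tree would be learned from data; these splits are written by
# hand only to show the shape of what such a model produces.

def extract_features(utterance):
    words = utterance.lower().split()
    return {
        # Did the caller mention a time, e.g. "7pm"?
        "mentions_time": any(w.endswith("pm") or w.endswith("am") for w in words),
        # Did the caller mention a party size?
        "mentions_people": "people" in words or "party" in words,
    }

def classify(utterance):
    f = extract_features(utterance)
    # Root split: is a time given?
    if f["mentions_time"]:
        if f["mentions_people"]:
            return "booking_complete"   # time and party size both known
        return "ask_party_size"         # time known, size missing
    if f["mentions_people"]:
        return "ask_time"               # size known, time missing
    return "clarify"                    # neither slot filled

print(classify("table for 4 people at 7pm"))  # booking_complete
print(classify("around 7pm"))                 # ask_party_size
```

The '7pm means 7 people' confusion mentioned earlier is exactly the kind of thing a feature extractor like this has to disambiguate.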

It's a slight improvement on the likes of Siri, Alexa and Assistant 1 then.
As I said, why haven't they been confident enough to do a live demo?
I would say more than 'slight'. It actually holds a conversation with people and understands the context of their comments even when those comments are unclear. Compared to current tech, and given the steps needed to move it on to the next level, I find it deeply impressive.
 
Deeply blue impressive?

That video was impressive, in all fairness... but who knows how 'off the cuff' the responses really were. I don't think a mimicking machine is much to worry about, actually. They talk about 'waking up'... which I don't think a machine can ever do.
 
Scientists have achieved the following:
  • Produced AI that can experiment with scenarios to produce unconventional and unique solutions to specific problems
  • Produced AI that can structure and execute other simple AI to interpret specific input parameters
I don't think it takes much of an extension beyond that for an AI to experiment with input parameters to produce unconventional and unique AI. What does 'waking up' mean? An AI that can restructure its own thinking based on experience? Is that not what such an AI would have achieved?
 
Scientists have achieved the following:
  • Produced AI that can experiment with scenarios to produce unconventional and unique solutions to specific problems
  • Produced AI that can structure and execute other simple AI to interpret specific input parameters
I don't think it takes much of an extension beyond that for an AI to experiment with input parameters to produce unconventional and unique AI. What does 'waking up' mean? An AI that can restructure its own thinking based on experience? Is that not what such an AI would have achieved?
It isn't restructuring its own thinking though, is it? It's defaulting back to a human who uses the experience to add further programming. It's a UI.

When it can patch itself, come back.
 
'Waking up', I've taken to mean, is a machine acquiring consciousness: a subject of experience who 'chooses' what to do. Everything created so far has been programmed; the computer doesn't actually 'think'.

This could change in the future, as we're just biological robots ourselves... but our brains, perception and so on are something a computer or machine is a long way off understanding or experiencing.
Get a certain number of neurons firing in such a way and consciousness emerges; that's my belief, and I think we're miles away from doing that mechanically.
 
It isn't restructuring its own thinking though, is it? It's defaulting back to a human who uses the experience to add further programming. It's a UI.
In your opinion. The industry calls it AI, and arguing about it is nothing more than pointless semantics. Call it what you like; you now know what the thread title refers to.

When it can patch itself, come back.
I didn't say it is. I said self-reprogrammability is, I think, the main missing link to 'self-awareness'. The building blocks are mostly there, but the links between them are missing.
 
'Waking up', I've taken to mean, is a machine acquiring consciousness: a subject of experience who 'chooses' what to do. Everything created so far has been programmed; the computer doesn't actually 'think'.

This could change in the future, as we're just biological robots ourselves... but our brains, perception and so on are something a computer or machine is a long way off understanding or experiencing.
Get a certain number of neurons firing in such a way and consciousness emerges; that's my belief, and I think we're miles away from doing that mechanically.
That's precisely what I mean. We already have a neural network that can create another neural network, and a neural network complex enough to make novel, unpredictable decisions from predictable inputs by experimenting with solutions and testing the answers. How much is left to do before those networks can start experimenting with neural networks to achieve an intended outcome? If it knows the required result, produces a neural network that achieves it, and replaces its own logic with it, would it not most likely ultimately become self-aware?
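The 'experiment on solutions and test the answers' loop described above is, at its simplest, just propose-evaluate-replace. A deliberately tiny sketch (my own illustration, not anything Google has published): random-search 'self-improvement' of a one-parameter model trying to learn y = 2x.

```python
# Toy propose-evaluate-replace loop: the system mutates its own
# parameter, keeps the mutation if it scores better, and so gradually
# "replaces its own logic" with improved versions. Real systems
# (e.g. neural architecture search) are vastly more sophisticated.
import random

def loss(w, data):
    """Squared error of the one-parameter model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data)

def evolve(data, steps=200, seed=0):
    rng = random.Random(seed)   # fixed seed for reproducibility
    w = 0.0                     # initial (poor) logic
    for _ in range(steps):
        candidate = w + rng.gauss(0, 0.5)       # propose a mutated successor
        if loss(candidate, data) < loss(w, data):
            w = candidate                        # replace self with the better version
    return w

data = [(1, 2), (2, 4), (3, 6)]   # samples of y = 2x
w = evolve(data)
print(round(w, 2))  # close to 2.0
```

Whether chaining enough of these loops together ever amounts to 'restructuring its own thinking' is, of course, exactly the philosophical question being argued in this thread.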
 
There is probably some residue left... Self-awareness is impossible to prove; even the mirror test isn't definitive. People talk of philosophical zombies (I'm sure you're aware), yet of course consciousness is still mysterious. Solipsism is untenable etc., but have you heard of the binding problem? I'm not saying a future man-made quantum computer cannot possibly be conscious, but I don't think we're anywhere close to worrying about the pain laptops feel...
 
Presumably you're talking about mental pain rather than physical? Physical pain would obviously need sensors developed specifically for that purpose, and who would do that? How do you define mental pain? At what point do the artificial instructions begin to conflict, and how would a self-correcting entity correct itself to handle those conflicts? If consciousness is measured by self-awareness rather than by mental pain and conflict, then at what level of self-awareness does it become conscious? If the decision-making processes take 'self', and the resultant impact on 'self', into account as a parameter, then is it self-aware?
 
