Robots to be 'Electronic Persons'

"MEPs have voted to propose granting legal status to robots, categorising them as “electronic persons” and warning that new legislation is needed to focus on how the machines can be held responsible for their “acts or omissions”. "

Tesla cars to be next???? :dk:

EU to vote on declaring robots to be 'electronic persons'
https://news.google.com/news/amp?ca...ml?amp#a-2a1f929b-4706-450a-90d5-d88765eb23e3
 
"Has your robot caused any third party to make a claim against you, the owner? If so call us on where there's a blame there's a claim..........bla bla bla. I see the future already.:doh:
 
The world is inching, voluntarily, towards a Skynet world. But at the macro scale of human existence on Earth, we're racing towards it.
Robots are being considered for any task they can realistically do: first at work, now at home. One day, I'm sure, people won't be paid to do work - we'll be paid to stay away from it.

We'll focus on leisure activities, well out of the robots' way, so they can deliver the productivity, comfort and convenience humans crave. Once the interconnected robots decide it makes more sense to live for themselves rather than in the service of humans, it's game over.
 
Until we have unlimited, free energy I don't see any major sea change in the way humans live and work. It's just that the work we do will become more and more cerebral.
 
It is reassuring to see that the EU lawmakers, having resolved all the insignificant problems their union is faced with, are turning their attention to something that really matters.
 
Unless they can prove the robots have become self-determining, modifying their behaviour beyond that expected by their programmers, then surely the people programming them should be held responsible?
 
When I was but a lad there was a prog on the telly called A for Andromeda, in which the plot centred on a radio signal received from a far-off galaxy giving instructions on how to build a computer, which in turn built a robot. This 'thing' then proceeded to take control... :eek:

https://en.wikipedia.org/wiki/A_for_Andromeda
 
We had a very interesting discussion a few years back on the TORR forum with Nathan Haselbauer about the singularity and the prospects of a true AI... Sadly, he won't be around to see even rudimentary progress in these matters (and I do not count the EU double-digit-IQ brigade's creative writing as progress).
 
It's weird, esoteric stuff indeed, but we should not be misled by the usual sensational journalistic approach and pictures of cute robotic humanoids. What this is really about is how exactly robots figure in the legal system as it relates to humans. How "responsible" they are needs to be defined somehow.

Take the robots that weld our Mercedes cars together. Let's suppose they produce defective welds that result in mechanical failure and death for those driving said cars. Who is then responsible? The company that manufactured the robots, the people who wrote their controlling software, the people who installed them in the factory and programmed their specific production-line operation, the people who maintained them - who exactly? The robots themselves? Are they sentient enough to be responsible for their actions? Do they have "choice"? Could they be regarded in the same way as the criminally insane, i.e. never able to be found guilty, on the grounds of insanity? :eek: Are they technically Mercedes-Benz "employees" with rights, i.e. to be adequately maintained and programmed so as not to kill people?

Another scenario: an employee enters a restricted area where robots are active and is decapitated because a safety-cage latch was faulty or had been tampered with - possibly even by the unfortunate employee who was killed. The grieving widow decides to sue for the loss of her husband, only to be told "the robot dun it" - and no, it's not a company employee, it's just there.

I don't think this is about "robot rights" at all; it's more about maintaining human rights in a world that's going to be populated by robots more and more. For that to happen, their presence as individual entities may need to be defined in law.

Isaac Asimov's Three Laws of Robotics say much about robots but not much about the complexity of the present legal system:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. ;)
 
Asimov's robots were all self-aware, self-determining and self-defining, capable of creative thought. We are nowhere near that level of independence in the robots we currently use.

With regard to the assembly-line robot in your example, it's clear and simple: it is the manufacturer using that tool that carries the responsibility for the failing welds.
 
Regardless of how advanced the AI is, each robot is still following a set of definitions created by its programmer.

Until a robot can break its programming and 'think' for itself, it would not be self-aware, nor capable of taking responsibility for its actions.

You could create a robot that has an ability to 'learn', but again you'd have to prove that the robot's learned behaviour was unique compared with every other robot placed in the same situation.

If you imagine us humans as biological robots, every single one of us thinks completely differently from the next, which makes each and every one of us unique. Even twins will diverge into two separate entities, thinking differently, even if they look identical.

Until a robot can demonstrate that it 'thinks' uniquely, how can we say it has artificial intelligence and is not simply following a set of definitions that define its behaviour?
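As a toy illustration of that point (hypothetical code, not any real robot's logic): give two 'learning' robots identical experiences and they converge to identical behaviour, so learning alone proves nothing about uniqueness.

```python
# Two "learning" robots trained on identical experiences end up with
# identical behaviour, so learning alone does not make either unique.

def train(experiences):
    """Build an action-preference table from (situation, action, reward) triples."""
    prefs = {}
    for situation, action, reward in experiences:
        key = (situation, action)
        prefs[key] = prefs.get(key, 0.0) + reward
    return prefs

def act(prefs, situation, actions):
    """Pick the action with the highest learned preference."""
    return max(actions, key=lambda a: prefs.get((situation, a), 0.0))

experiences = [
    ("obstacle", "stop", 1.0),
    ("obstacle", "continue", -5.0),
    ("clear", "continue", 1.0),
]

robot_a = train(experiences)
robot_b = train(experiences)  # a second robot, trained identically

actions = ["stop", "continue"]
print(act(robot_a, "obstacle", actions))  # stop
print(act(robot_b, "obstacle", actions))  # stop -- indistinguishable from robot A
```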
 
I am quite blasé about these 'advances' in AI, as they still seem to amount to making a computer do one thing really well.

Great. A computer can beat a chess grandmaster, but can it also write a novel, cook a meal, cry at a piece of music?
 
Computers are being taught to think for themselves. One very active field (which the company I work for is also involved in) is machine learning. Typically, because of the complexity, these are not single computers but clusters of them operating as one entity, providing more advanced, "better" thinking than humans can.
Personally, I think the instructions humans give are limiting to the machine, and all the while the machines are learning within the designed area, that's fine. But when they "learn" and decide to apply their thinking outside those confines, to areas humans have not specified, that's the tipping point.
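A minimal sketch of that boundary (invented numbers, no real system implied): a model that fits well inside its designed area can be badly wrong the moment it is asked about territory outside it.

```python
# Learning looks fine inside the designed area, then fails outside it.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def true_process(x):
    return x + 0.1 * x * x  # the real process is mildly nonlinear

# The "designed area": inputs 0..5, where the process looks nearly linear.
xs = list(range(6))
a, b = fit_line(xs, [true_process(x) for x in xs])

print(round(a * 4 + b, 2), true_process(4))    # 5.67 vs 5.6: fine inside the area
print(round(a * 50 + b, 2), true_process(50))  # 74.67 vs 300.0: badly wrong outside
```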
 
This thread reminds me of the episode of Star Trek: TNG in which Picard has to protect Data's daughter from being dissected by a computer engineer who wants to find out how her positronic brain works.

Anyone ever read 2000 AD, with Judge Dredd and other characters? Two robots feature heavily: one a rubbish droid, the other a fighting droid. Both have had their share of experiences, especially during the Volgan War, where Hammerstein works with his human sergeant to attack Volgan soldiers.

Of course, this is all fiction, but history is full of tales of fiction that eventually became reality, such as rockets, helicopters and submarines. AI is the next big thing, with machines capable of making rational decisions without the hindrance of human emotion clouding their viewpoint.

(In the film I, Robot, the character played by Will Smith is rescued by a robot because the robot works out that the probability of saving him is greater than the probability of saving the little girl who is also drowning. A human would have favoured saving the little girl, even though her odds of survival were lower.)

Our opinions are often influenced by our perceptions, rather than clear facts. A machine would assess using a different set of parameters.

One final point worth noting: a machine won't be affected by ego. Ego is mankind's biggest downfall. Wars have been fought because of ego. Men fight over women because of ego. Ego makes us overly competitive, which can be detrimental to our overall welfare.
 
Wouldn't ego be a measure or indicator of being self-aware? Ego per se isn't a bad thing - only when it becomes inflated, when one's perception of self-importance dominates one's thinking?
 
Ego is fine when you practise tolerance, but some people have way too much of it.

I love watching Tipping Point when Ben Shephard mispronounces words: even when the contestant pronounces a word correctly, he will 'correct' them by repeatedly mispronouncing it, as if it is his right.

There was an episode of Bullseye where the question was, 'Who plays Gail Tilsley?' The contestant answered 'Helen Worth', to which Jim Bowen responded, 'I'll give you that, but her name is Ellen Worth, not Helen Worth, and if she is watching now, she would be most annoyed.' A classic case of someone getting cocky because of their ego: it turns out her name is Helen Worth, and the contestant was made to feel small in front of millions of viewers.

Most of us live normal, balanced lives, but in normal society there are those with such an inflated ego that we suffer their bragging and abuse. We have all worked for a brash and arrogant boss at some time in our lives, often wondering how they managed to get where they are.
 
The tipping point is where robots become autonomous - given a degree of freedom of action. This does not have to be much to have far-reaching consequences.

Back to the robots-on-the-assembly-line analogy. At present we hear that for certain operations on the Mercedes production line, such as installing trim and the dashboard, there are so many options that automation is not appropriate and humans are better at the task. It's conceivable that, given more decision-making ability, assembly robots could eventually do this too. At the same time it would be beneficial to allow them to modify their actions, fine-tuning their operations to improve production-line performance. Another area where efficiency might be improved is robot downtime for maintenance: rather than hamper the robots with regular scheduled maintenance, make them "self-aware", so that by monitoring their own performance they can decide when they need maintenance, lubrication, recalibration and so on. Downtime is reduced and everyone is happy.


Then six months down the line, cars are reported with dashboard fires :eek: - and this is found to be due to faulty installation by the robots. When a robot's software record is downloaded, it's found that the robot decided that, in order to maintain optimum assembly throughput, it was deferring maintenance downtime and thus gradually drifting out of calibration. It had taken its own decision between two imperatives it had been given, because the consequences of the interaction between the two had not been predicted by humans. That's because decisions are often a complex choice between competing priorities. Humans who program robots face the same sort of complexity, and often they get it wrong. Nobody in their right mind would set off to sea in a Ro-Ro ferry with the bow doors open, would they? See the Herald of Free Enterprise. Or the recent tragic air accident in South America, where a jet carrying the Chapecoense football team crashed because it ran out of fuel - this despite the pilot being advised to land and refuel on the way - and he might have got away with it had the aircraft not been forced to circle while another aircraft was given landing priority.
So that's where the tricky bit comes in for the factory robot assembling the dashboards: it's been told to optimise assembly-line throughput and at the same time given autonomy over its own maintenance, leading to a clash in priorities. If autonomous robots are programmed by humans who make mistakes despite their best intentions, then surely, given autonomy, the robots will make mistakes too?
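To make that clash concrete, here is a toy simulation (all numbers invented purely for illustration): a robot rewarded for throughput and trusted to schedule its own maintenance defers it, drifts out of calibration, and quietly racks up defects.

```python
# A robot rewarded for throughput and given discretion over its own
# maintenance defers it, drifting out of calibration month by month.

def simulate(months, defer_maintenance):
    wear = 0                 # calibration drift, in arbitrary wear units
    cars = defects = 0
    for month in range(months):
        if not defer_maintenance and month % 3 == 0:
            wear = 0         # scheduled maintenance: a month's output lost
            continue
        cars += 100          # monthly throughput
        wear += 1            # wear accumulates every working month
        defects += 5 * wear  # the further out of calibration, the more faults
    return cars, defects

print(simulate(12, defer_maintenance=False))  # (800, 60):   less output, few defects
print(simulate(12, defer_maintenance=True))   # (1200, 390): more output, 6.5x the defects
```

By the throughput measure it was given, the deferring robot is the better employee - right up until the dashboard fires start.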
It's easy to dismiss such things, but you just have to read Richard Feynman's report to see how the best engineering and project-management minds can get things wrong.
https://en.wikipedia.org/wiki/Rogers_Commission_Report#Role_of_Richard_Feynman
 
If people are interested in this, have a look at Sam Harris' perceptive talk https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it

On a connected note, one of the issues exercising those involved in self-driving vehicles (whether in design, law, insurance or programming) is decision-making in the event of a crash. How should the vehicle respond: save the life of the driver (or owner?) at all costs, even if that means crashing into a bus stop full of small children? Would that change if the machine knew the driver had terminal cancer? There are lots of other potential scenarios. Should your car exercise your moral judgement, or its own?
 
The car wouldn't have, or even comprehend, morals. It simply follows the decision tree the programmer installed. It wouldn't be frozen by fear or indecision.

The final responsibility should always be with the driver, even if he didn't make the final decision.
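A minimal sketch of that point (hypothetical rules, nobody's actual product): the 'moral choice' is just whichever branch the programmer wrote in advance.

```python
# The car's crash "decision" is a pre-installed comparison, not ethics.

def crash_response(occupant_survival, pedestrian_survival):
    """Choose a manoeuvre from pre-installed comparisons of survival odds."""
    if occupant_survival >= pedestrian_survival:
        return "protect occupant"          # the I, Robot outcome: odds, not morals
    return "swerve to protect pedestrians"

print(crash_response(0.45, 0.11))  # protect occupant
print(crash_response(0.20, 0.80))  # swerve to protect pedestrians
```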
 
I wonder if their parents will be held responsible until they are 21...?
 
