Skunk was one who regarded robots as mere tools, to be used and discarded when no longer useful. He is among the majority of Westerners who believe that because humans possess immortal souls, robots, no matter how intelligent, no matter how human-like their behavior, are still soulless machines, unworthy of human rights.
A bomb on a child. Unless I am mistaken, Saddam used children as expendable minefield clearers in the first Gulf War. I read some rather sad and disgusting things about Josef Mengele's 'research' on twins during WWII.

It should not surprise anyone that a childlike robot would be treated thus.
Now, to try & get back on topic:
I was speaking about a machine's inability to distinguish between rules that are local customs and rules that are universal: mala in se vs. mala prohibita. Although I suppose that anything can be programmed into a computer. My whole discussion relies upon an assumption that may itself be unfair.
A newborn child is completely helpless; Atom and Uranium entered the world and, within mere minutes, were out running around having fun and adventures. Atom, however, was equipped with weapons (which ones depend upon which incarnation) and was therefore lethal at birth. Yet his mistakes, as depicted in the 80s series, were limited to innocent misuse of his physical strength: breaking the leg off the chair and breaking the window. I believe there was a bird outside, to which Tenma pointed, saying 'bird'. Atom could have vaporized the bird if not for the programming that prevented such innocent mistakes.
Therefore, I assert that there was such programming, whose purpose was to prevent any such innocent acts of destruction. Would it have been fair to burden a child with such rules intruding upon his stream of consciousness? I say, 'no'. The rules would have been subconscious, so that they did not overburden Atom's thought processes. He would not intend to vaporize the bird, so why should we make him think about it? I should probably elaborate, but tough. I am moving on.
These rules existed in his programming to protect humans, as though Asimov's Three Laws were inherent in robots. In other words, when creating a robot, it will, by virtue of its being a robot, be equipped with these rules. Only with the Omega Factor could Atlas circumvent a robot's inherent subservience to humans. I assume that any rules of the mala prohibita species would be learned by experience, while only these were programmed into him from the start. Nevertheless, in a machine's representation of rules, can it treat those it learns by experience as any less matters of black & white, 1 & 0, 'yes' & 'no', than those inherent rules?
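To make the question concrete, here is a minimal C sketch of one way a rule need not be a bare 1 or 0. Everything here is my own invention for illustration (the names, the weights, the struct), not anything from the show: each rule carries a weight, so learned (mala prohibita) rules can be graded while inherent (mala in se) rules are pinned at an absolute maximum.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical rule representation: a weight instead of a bare yes/no.
   Inherent rules are absolute; learned rules are negotiable. */
typedef struct {
    const char *description;
    bool        inherent;   /* built in at the factory? */
    double      weight;     /* 1.0 = absolute, lower = negotiable */
} rule_t;

/* A rule may be broken only if the reason outweighs it;
   inherent rules never yield, whatever the reason. */
bool may_violate(const rule_t *r, double reason_weight)
{
    if (r->inherent)
        return false;
    return reason_weight > r->weight;
}
```

Under this (entirely hypothetical) scheme, the machine's answer is no longer forced to be black & white for every rule, only for the inherent ones.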
Robot Law:
Thou mayest not leavest thine country withoutest permission from the government, amen.
How would there be any exceptions allowed?
if (good_enough_reason(reason))
    i_may_violate_this_rule();
/* determines whether a reason is good enough to allow violation of a rule that otherwise has no gray areas */
bool good_enough_reason(reason_t reason);
Sorry for the 'C' code, perhaps I should have used pseudocode.
For that exception to exist, it must have been there from the start. Thus, the robot's ability to acquire new rules must allow for exceptions to these new rules. Perhaps my programming knowledge complicates my analysis, but tough. I think I just defeated my earlier argument. For a robot or human to function, it must be able to acquire new rules and abide by them. However, there must be exceptions to some rules, but which ones?
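A rough C sketch of that point, assuming nothing beyond the argument itself: if exceptions must exist from the start, then the rule-acquisition machinery can attach an (optional) exception test to every rule the moment it is learned. All names here are hypothetical illustrations.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Each acquired rule carries its exception test from birth. */
typedef bool (*exception_fn)(const char *reason);

typedef struct {
    const char  *text;
    exception_fn exception;  /* NULL means no gray area at all */
} learned_rule_t;

/* The rule binds unless its own exception test accepts the reason. */
bool must_obey(const learned_rule_t *r, const char *reason)
{
    if (r->exception != NULL && r->exception(reason))
        return false;   /* a good enough reason: rule may be violated */
    return true;
}

/* one possible exception test, invented for the example */
bool sparing_feelings(const char *reason)
{
    return reason != NULL && strcmp(reason, "spare someone's feelings") == 0;
}
```

The design choice matters: a rule acquired with `exception = NULL` really is black & white forever, which is exactly the "which ones?" question above.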
When a guy you really loathe comes up and greets you, you would like to say, "get lost, #%%#", but instead, wanting to make the encounter go smoothly, you say, "good to see you again, you %$$@#." Lying is a gray area that both Asimov's robot (in one story) and Atom used to spare a person's feelings. Hence, Atom was able to deal with gray areas, while True was not. To True, if lying to spare the girl's feelings was OK, then so was making false reports about earthquakes.

Never mind that the girl had been blind for some time, and that for that same time True had been lying, before Atom and his classmates came to visit.
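The difference between Atom's white lie and True's earthquake reports can be sketched in C with one invented comparison (numbers and names are mine, purely illustrative): weigh the estimated harm of the lie against the estimated harm of the truth, instead of treating "lying is forbidden" as a 1/0 rule.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy gray-area test: a lie is acceptable only when it does
   less estimated harm than the truth would. */
bool lie_is_acceptable(double harm_of_lie, double harm_of_truth)
{
    return harm_of_lie < harm_of_truth;
}
```

On these (invented) numbers, sparing the girl's feelings passes the test, while a false earthquake report, whose lie harms many and whose truth harms no one, fails it. True's mistake was generalizing from the first case without any such comparison.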
So, anyway, Atom's AI was far superior to True's because Atom could deal with gray or fuzzy areas, while True could not. So, notwithstanding that ones and zeroes represent all data to a computer, and hence a robot, some robots can indeed distinguish between mala in se and mala prohibita. :wahah: