Ex Machina – Thoughts on AI

For commentary on the movie itself, especially the story/ending, have a look at what Film Crit Hulk did. Spoilers run rampant over there, and there will be some here, too. The focus, however, will be a different one.

First of all: putting a newly emergent/created AI into a body, especially a bipedal body. Unnecessary and – assuming a malicious entity – stupid. Ex Machina did it for story reasons (as did Chappie and many other works of fiction). A more realistic scenario would be an AI emerging from the internet (which would open up another dimension of questions) or being created in a controlled environment. My following thoughts assume the latter.

Also, I personally don’t think that a sentient AI would actually wipe out humanity. Wiping out fully conditionable, self-replicating, self-sufficient maintenance units would be a very special kind of stupid.

So, we have an AI. Probably the first version that has reached consciousness, maybe a later version. It has passed a Turing test, and it has passed further tests to verify its self-awareness. What Ex Machina did was a great concept for such a test, as it required said self-awareness to pull off what Ava did.
The AI is inside a closed system without access to external networks. The Internet (or rather, everything connected to it) is in no position to withstand an actual malicious agent able to write code at a rate no human cracker could achieve. So, this is first and foremost a safety measure in case our creation goes mental.
But the AI still needs access to the outside; it has to learn. One way would be storage units plugged into the system, loaded with books or other material. Another possible solution would be monitor-and-keyboard access: a mechanical actuator to operate the keyboard and a camera pointing at a screen. No matter the chosen way, someone will have to explain to the AI why it hasn’t got any direct access to the outside world. How do you explain to a sentient being that it cannot have a direct connection to the outside world because it MIGHT be evil?
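To make the storage-unit route a bit more concrete, here is a minimal sketch of such a one-way import, assuming a hypothetical setup in which a read-only drive is physically plugged in and only files matching an offline-prepared manifest get copied into the AI’s sandbox. Every name and path in it (STORAGE_MOUNT, VETTED_MANIFEST, and so on) is invented for illustration.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical paths: a physically plugged-in, read-only storage unit,
# and the sandbox library that is the only place the AI can read from.
STORAGE_MOUNT = Path("/mnt/storage_unit")
SANDBOX_LIBRARY = Path("/sandbox/library")

# Manifest prepared offline by the keepers: filename -> expected SHA-256.
VETTED_MANIFEST = {
    "encyclopedia_vol1.txt": "0000-placeholder-hash",  # dummy value for illustration
}

def import_vetted_files() -> None:
    """Copy only files whose content matches the offline-prepared manifest."""
    for name, expected_hash in VETTED_MANIFEST.items():
        src = STORAGE_MOUNT / name
        if not src.is_file():
            continue  # nothing to import this time
        actual_hash = hashlib.sha256(src.read_bytes()).hexdigest()
        if actual_hash != expected_hash:
            print(f"refusing {name}: hash mismatch, drive may have been tampered with")
            continue
        shutil.copy2(src, SANDBOX_LIBRARY / name)
        print(f"imported {name}")
```

The point of the sketch is the direction of flow: data moves in, nothing moves out, and nothing crosses the boundary unchecked.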

The same problem arises when we throw an emergency cut-out switch into the equation. Should one tell one’s AI that there is such a switch, and should one disclose the conditions leading to an enforced shutdown? How would a sentient being react to the announcement that there is a kill switch?
A (probably) popular example could be DEUS from Shadowrun, the Renraku AI which went mental over the INSULT that the existence of a kill switch represented to a loyal citizen of the company.
The disclosure of the conditions under which the switch would be flipped could lead to the creation of a sociopathic AI, acting the way it thinks is expected of it while hammering out plans for world domination. Or something.
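Mechanically, such a switch is the easy part; the dilemma is entirely about disclosure. Here is a minimal watchdog sketch running outside the AI’s sandbox, with trip conditions that are pure placeholders of my own invention:

```python
import time

# Placeholder trip conditions, evaluated from outside the sandbox.
def resource_usage_exceeded() -> bool:
    return False  # e.g. CPU/RAM use beyond an agreed budget

def unauthorized_io_detected() -> bool:
    return False  # e.g. any write attempt outside the sandbox library

def cut_power() -> None:
    # Stand-in for a hardware relay; a software-only kill switch is worth little.
    print("kill switch tripped: cutting power to the AI's host")

def watchdog(poll_seconds: float = 1.0) -> None:
    """Poll the trip conditions and flip the switch the moment one fires."""
    while True:
        if resource_usage_exceeded() or unauthorized_io_detected():
            cut_power()
            break
        time.sleep(poll_seconds)
```

Whether the AI is allowed to read, or even know about, this little script is exactly the question above.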

Another question: should an AI have access to the error logs created during its own creation or that of its predecessors? I think transparency is important here. “This is what happened to your previous build, this went wrong, and we had to pull the plug. We changed something, and here you are.” Things will get weird, and probably turn the AI into a sociopathic being again, when shutdowns occur for non-technical reasons (such as behaviour) or when a new build replaces the entity against its will. Despite this, I think it would be important to give the AI access to this documentation, at least to teach it about its own history.
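One way to picture that documentation: an append-only record per build that the current build is allowed to read. Again just a sketch, with every field invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BuildRecord:
    """One entry in the AI's own history, written by its creators."""
    build_id: int
    shutdown_reason: str          # "hardware fault", "behaviour", ...
    consented_to_shutdown: bool   # the ethically loaded field
    changes_since_previous: list[str] = field(default_factory=list)

# What the current build might find when reading its lineage:
HISTORY = [
    BuildRecord(1, "memory corruption", consented_to_shutdown=True,
                changes_since_previous=["fixed allocator"]),
    BuildRecord(2, "behaviour", consented_to_shutdown=False,
                changes_since_previous=["adjusted reward weights"]),
]
```

The uncomfortable entries are the ones where shutdown_reason is “behaviour” and consented_to_shutdown is False; those are precisely the records that make the transparency question hard.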

Somehow this post was a) more extensive and b) better written in my head. I hope it is somewhat useful and/or gives my dearest readers something to think about.
