If Computers Could Think

Sunain Singh Banga
Thursday, 07 November 2019 / Published in Blogs

We read articles about computers that began to think and had to be shut down. Some articles talk about two computers developing a language of their own, or developing their own set of moves in a game. Or about two computers talking to each other over a Git repo (a code repository), commenting with solutions to the code and replying in GIFs.

Meanwhile, at times, your Swiggy chatbot fails to understand your request. Don't believe me? Try throwing in uncommon jargon or slang and pushing in multiple sentences at a time; it will die a painful death, first losing its brain and then finally forwarding you to a human.

But what is thinking? A common dictionary will tell you that it is the process of considering or reasoning about something. Over millions of years of evolution, we have developed a complex algorithm in our own heads to go about the process of thinking. We still fail to produce a sophisticated output for most of our problems, especially the ones that exist only at a personal level.

And sometimes, even when the numbers seem right, we add the element of persona and emotion. We won't buy a product, even if it is the best in the market, because the owner is rude or the brand has a bad image.

Computers can compute, and they can consider, but can they reason? Not right now. But let's be fair: they are babies compared to the evolved human race. We spent a long stretch of our early existence just figuring out how to sharpen stone tools, and that was after we had already learned to walk. Yes, there are variables to it, but the statement won't change.

Well, if they can't think, can we make them think? Will we be able to feed millions of years of evolution onto a computer chip? What if we try to replicate it with a certain set of rules?

In that case, I don’t think they will be anything more than a bunch of psychopaths. Why? How?

Well, psychopaths, as they are, are unable to differentiate right from wrong. I know I shouldn't drop my coffee cup from the table; it will shatter into a million pieces, so I won't. If I'm angry, I still won't. If I'm very angry, then… I might. I lose my thinking capabilities under the influence of emotion.

What I did was wrong, but it was conditional. I'll regret it later, I know. But for a psychopath, conditions don't apply. If you can break a cup in one state, then why not in all of them? And when you are aware of your action and you are the one who enacts it, why would you have remorse?

A computer acts similarly. It acts, in all conditions, with no remorse.

Okay, let's not add the emotion bit here. Let's leave computers with their flat, emotionless voices. But say you made a certain set of rules for the computer to follow. Let's take ones that would be pretty common, in this order of priority:

A. Never hurt a human

B. Never hurt a living organism

C. Never hurt yourself

and D. Obey orders that don’t break these rules

So, say a wild beast attacks a human and the human asks the bot for help. The bot is not hurting the human by not helping, but it might harm a living organism if it tries to help, so it would rather not move at all than break a rule. Or it would not help someone in a fire, where it might harm itself.
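Here is a minimal sketch in Python of how such a rule-checker might end up frozen. It is only an illustration of the four rules above; the scenario, the action names and the violation tags are made up for the example, not any real robotics API.

    # Rules, in priority order:
    #   A. never hurt a human
    #   B. never hurt a living organism
    #   C. never hurt yourself
    #   D. obey orders that don't break these rules

    def permitted(action, violations):
        """An action is allowed only if it breaks none of the rules."""
        return len(violations.get(action, [])) == 0

    def choose(actions, violations):
        """Pick the first permitted action; otherwise freeze and do nothing."""
        for action in actions:
            if permitted(action, violations):
                return action
        return "do nothing"

    # The wild-beast scenario: fighting the beast hurts a living organism (B),
    # and shielding the human risks hurting the bot itself (C).
    violations = {
        "fight the beast": ["B"],
        "shield the human": ["C"],
    }

    print(choose(["fight the beast", "shield the human"], violations))
    # -> "do nothing": every helpful action breaks some rule, so the bot freezes.

Nothing in the rules rewards helping; they only punish harming, so the best behaviour the rules permit is to stand still.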

Say we change the rule from never hurt a human to always protect a human. What happens when one human attacks another? Or when the bot detects a threat to humans from wild animals? Will it just kill the one at the scene, or will it chase down every single member of the species?

Let these slip; they are edge cases. Let's take the example of using AI in medicine, humanoids treating humans. The bot cannot hurt a human, so it will not perform an operation, because the needle is hurting the human. It does not consider the long-term benefit; that is not in the rules.

So, we make an exception: it can hurt humans while performing operations on them. Then it can kill them too while performing an operation, right? Okay, protect humans. But sometimes you have to pull the plug, to set the body free of its suffering, and then what? With enough data we can train it to see the long-term benefits, but will you actually trust data over trying to save a human's life?
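To make the loophole concrete, here is a small follow-up to the earlier sketch (again, the names are made up): once harm during an operation is whitelisted, the checker no longer distinguishes a needle prick from something lethal.

    def permitted(action, violations, context):
        """Exception: harming a human is waived while performing an operation."""
        broken = violations.get(action, [])
        if context == "operation":
            broken = [rule for rule in broken if rule != "A"]  # waive rule A
        return len(broken) == 0

    # Both actions break rule A ("never hurt a human"), and both get waived
    # inside an operation -- the rule has no notion of degree or intent.
    violations = {
        "insert needle": ["A"],
        "administer lethal dose": ["A"],
    }

    print(permitted("insert needle", violations, "operation"))           # True
    print(permitted("administer lethal dose", violations, "operation"))  # True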

No, now you are just exaggerating; no bot will ever do that.

Well, that's what normal humans think, not psychopaths. According to one case study, a man and his wife were about to get a divorce, but in the man's mind he owned his wife, and he held this idea of "all or nothing", the kind of vow made in churches and temples.

Ideally, there exists a solution where they split, find someone else who is more compatible, and live the rest of their lives with them. But for the man, the solution of letting her go doesn't exist, so while stabbing her to death he pictures it as a "justified homicide", which the community rightly calls murder.

So, to make the bot think like a human, you teach it emotions like a human's. Sounds right? But then how well do you define the boundaries of love and anger? How do you explain to a bot that stalking is not the way, that threatening someone is not the way, and that you can't really make someone fall in love with you?

[Poster of the movie "Stalking Laura"]

Just as Richard Farley couldn't understand when he stalked Laura Black. Richard Farley, now termed a psychopath, got really attracted to a new colleague in his office. He followed her around, joined her Zumba class, bought her concert tickets and even went to join her at a game. The story ended in mass murder, a real case loosely adapted into the movie "Stalking Laura".

How do you do that while still explaining that it is okay to break some rules at times? For love is not something that can be bound by rules.

How do you justify jumping a traffic light, ready to pay the fine later, just because someone you truly love is counting their last breaths on a hospital bed? How do you make a bot think logically and weigh things emotionally?

We humans make rules for our welfare. But we break those rules at times, and that's how we think. When you define rules for something, how do you close the loopholes? How do you not make a psychopath?

If computers think, they will be a bunch of psychopaths.

Here is another blooper.

Tags: AI, Artificial Intelligence, computers, life, motivation, personal

