
What exactly is AI?
Artificial Intelligence, popularly known as AI, is all around us, often without our realizing it. From Siri to self-driving cars, AI has become part of everyday life, and we have welcomed it in the name of technological advancement. Science fiction tends to portray AI as a robot with human-like characteristics, but in reality AI encompasses everything from IBM's Watson and Google's search algorithms to autonomous weapons.

AI as it exists today is known as weak AI or narrow AI, designed to perform a single narrow task such as driving a car, recognizing faces, or running internet searches. The long-term goal of many researchers is to move from narrow AI to general AI, commonly known as strong AI or AGI. While narrow AI may outperform humans at the specific task it was built for, such as solving equations or playing chess, AGI is expected to exceed human capability at nearly every job, a claim that remains hotly debated.


The ultimate goal of AI
In short, the ultimate goal of AI is to benefit society in every possible way. The aim of keeping AI beneficial to society motivates research across many fields, from law and economics to technical topics such as control, security, validation, and verification.

Can AI be dangerous?
Researchers who have spent years studying AI generally agree on one point: a super-intelligent AI is unlikely to exhibit human emotions such as hate, love, care, or jealousy, and there is no reason to expect it to become intentionally benevolent or malevolent.

According to experts, there are two scenarios in which AI could become dangerous:

  1. AI is programmed to do something devastating: Autonomous weapons are AI systems programmed to kill. In the wrong hands, these weapons could cause mass casualties. An AI arms race could eventually lead to an AI war, again resulting in mass casualties. To avoid being thwarted by the enemy, such weapons would be designed to be extremely difficult to simply 'turn off,' so humans could plausibly lose control of the situation. This risk exists even with narrow AI, and it grows as AI systems become more intelligent and more autonomous.
  2. AI is programmed to do something beneficial but develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with our own, which is no cakewalk. For example, if you ask an obedient intelligent car to take you to the nearest shop as fast as possible, it might get you there chased by helicopters and covered in vomit, doing exactly what you asked for but not what you wanted (see the sketch after this list). If a super-intelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on the ecosystem as a side effect, a dangerous and far from pleasant situation.
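To make the "exactly what you asked for, not what you wanted" problem concrete, here is a minimal, purely hypothetical Python sketch. The candidate plans and their scores are invented for illustration and do not describe any real system: an optimizer given only the literal objective (minimize travel time) picks a reckless plan, while one given the intended objective, which also values comfort and legality, does not.

```python
# Toy illustration of objective misalignment (hypothetical plans and numbers).
# Each plan is (name, minutes to arrive, comfort score 0-1, legal?).
plans = [
    ("drive normally",       20, 1.0, True),
    ("speed and run lights",  8, 0.2, False),
    ("off-road shortcut",    12, 0.1, True),
]

def stated_objective(plan):
    """Reward only what was literally asked for: arrive as fast as possible."""
    name, minutes, comfort, legal = plan
    return -minutes  # faster is better; nothing else matters

def intended_objective(plan):
    """What the passenger actually wanted: fast, but also comfortable and legal."""
    name, minutes, comfort, legal = plan
    return -minutes + 10 * comfort + (0 if legal else -100)

print("Literal optimizer picks:", max(plans, key=stated_objective)[0])    # speed and run lights
print("What we actually wanted:", max(plans, key=intended_objective)[0])  # drive normally
```

The point of the sketch is that the "misbehaving" optimizer is doing its job perfectly; the flaw lies in the objective we wrote down, which left out constraints we took for granted.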

The examples above illustrate that the real concern about advanced AI is not malice but competence. A super-intelligent AI could be a boon to humanity and extremely good at accomplishing whatever goals it is given, but if those goals are not aligned with ours, we have a problem. A key goal of AI safety research is to make sure humanity is never placed in the position of the ants, harmed not out of malice but simply because we are in the way of a larger objective. Hence, let humanity keep charge of its own tasks and not hand everything over to AI.