Writing an Essay on AI Regulation

I have already covered the pros and cons of AI in a previous post. Here I would like to delve deeper into the topic and offer you some interesting essay ideas regarding AI regulation.

Hopefully, the previous article gave you enough background to follow along. Even if you missed it, you have probably used Siri (an example of weak AI) at least once in your life. Other examples include self-driving cars, therapeutic robot dogs, fraud detection programs, and many more. I believe I have got you excited! In any case, let’s jump right into it.

Reasons for Regulations

Multiple reasons are cited in favor of AI regulation. The first is AI’s goal-oriented behavior. AI can be programmed to perform a certain task, but it can get out of control if humans do not encode the situations in which it has to shut down. For instance, a weapon built to bomb certain areas will not stop when it spots a school unless that pattern of behavior was included in the system from the start. Similarly, nothing would stop an AI from destroying a nature reserve if it stood in the way of achieving its goal.
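To make the point concrete, here is a minimal toy sketch in Python. It is purely illustrative, not real AI: the function name, the target list, and the scenario are all invented for this example. It shows that a goal-driven program only spares what its programmers explicitly encoded as a stop condition.

```python
# Toy illustration (not real AI): an agent that pursues its goal and only
# skips a target if a human explicitly encoded a stop condition for it.

def run_agent(targets, stop_conditions):
    """Act on each target unless some stop condition forbids it."""
    acted_on = []
    for target in targets:
        if any(check(target) for check in stop_conditions):
            continue  # a safeguard was encoded for this case
        acted_on.append(target)  # otherwise the agent simply pursues its goal
    return acted_on

targets = ["military base", "school", "nature reserve"]

# With no encoded safeguards, nothing is off-limits:
print(run_agent(targets, []))
# -> ['military base', 'school', 'nature reserve']

# The school is spared only because a human thought to encode that rule;
# the nature reserve, which nobody thought of, is not:
print(run_agent(targets, [lambda t: t == "school"]))
# -> ['military base', 'nature reserve']
```

The design flaw the essay describes is visible here: safety is opt-in, so anything the programmers failed to anticipate gets no protection at all.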

Secondly, AI could trigger a serious economic and social crisis if many people lose their jobs within a short period. This may sound Luddite, but it is already happening: many menial jobs are now performed by robots. Great, you may think, I’ll just go get a college degree. The problem is that modern AI is encroaching on intellectual work as well, outperforming people at chess, video games (see Dota 2), and even some surgical tasks. New AI applications are being discovered all the time! Therefore, you cannot be certain that the profession of your choice will still be relevant in the future (unless you want to be an AI researcher, of course).

Thirdly, AI may result in the singularity and the subjugation of the human race. This may sound like a sci-fi plot, but it is a genuine fear even among scientists. The singularity is a hypothesis which states that at some point AI could become superintelligent and start improving itself on its own. Just as humans subjugated nature by being smarter and more adaptable, so could a strong AI subjugate us. Elon Musk, the entrepreneur behind Tesla and a major backer of the Future of Life Institute, has expressed similar concerns repeatedly. Maybe, just maybe, it is not all that impossible, and people need to take this possibility seriously.

Arguments Against

The biggest problem with regulating AI is determining what exactly needs to be regulated. The industry is still very young, despite its already tremendous discoveries. Moreover, it tends to be very unpredictable, like science in general. Just google “accidental scientific discoveries” to see my point.

The second hurdle is the lack of accurate information about the technology. Many ordinary citizens, as well as politicians, think of AI either as a scary robot invasion or as far-fetched science fiction too distant in the future to raise any concerns. Neither attitude is conducive to establishing regulations: the former camp wants to abolish AI altogether as something inherently malicious, while the latter simply does not care enough. There seems to be no middle ground.

Finally, many fear that regulating AI will stunt its development. Rules inevitably slow down the process and create bureaucracy, so many AI proponents treat supporters of regulation with suspicion (see Mark Zuckerberg vs. Elon Musk). Moreover, if no agreement is reached in the global arena, the US may seriously lag behind its competitors and lose its leading position in science and innovation.

Arguments in Favor

The biggest argument in favor comes from Elon Musk, as well as the old adage: it is better to be safe than sorry. Musk maintains that slower but more secure development is profoundly better than a rapid but risky breakthrough. He has even tweeted that AI is more dangerous than North Korea and nuclear weapons.

AI is often compared to biological and chemical weapons, which are banned by the UN. Musk wants to see an international treaty regulating AI signed. He believes AI has a destructive potential that cannot be ignored, especially in the wrong hands. Thus, he argues, there should be an international agreement that treats AI not just as a potentially benevolent force that could improve the quality of life tremendously, but also as a military threat.

The third and final reason is that AI is not as nuanced as a human being. It cannot change its behavior based on new information as easily as a person can. Thus, given full autonomy, it can do more harm than good simply because it cannot tell the difference between the two. AI has a goal and, if not programmed with stop signals, will persevere with it in spite of everything.

Conclusion

It is true that it is impossible to tell for sure, at the present moment, whether AI should be regulated. Its development is still recent and highly unpredictable, so there is not enough theoretical knowledge or practical experience to say with 100% certainty that some subfields of AI need to be controlled.

Usually, people impose stricter rules only after discovering that something does not work; it is human nature not to see a threat coming. However, scientists are trained to pinpoint such dangers before they actually hit, so perhaps it makes sense to listen to them in the first place. I wish it were that easy.

There is no consensus even among researchers as to whether, and to what extent, AI needs to be regulated. Even if there were, enforcing regulation would be extremely costly, with no guarantee that it would actually help. That money could be spent on other, more tractable problems, such as medical care and the decline of education. Naturally, no policymaker wants to advocate for something people can hardly relate to or even understand. That is why it is so hard to get any funding for AI regulation.

Hopefully, this article has helped you formulate your own ideas. In any case, only the future will tell.