Every Hollywood movie about artificial intelligence (A.I.) opens with the same horrifying image: a self-learning program that becomes smart enough to overpower the human population. Hollywood is not known for its accurate representations of technology, and its movies are rarely bound by the laws of physics. So have we all been misinformed about what artificial intelligence is? In essence, artificial intelligence is just machine learning. The term pairs human knowledge (the "intelligence") with code written by humans that teaches itself (the "artificial" part).
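To see how modest that really is, consider a rough sketch in Python (the numbers below are invented purely for illustration): a handful of ordinary lines of code that "learn" a pattern from examples a human supplies.

```python
# A minimal illustration of machine learning: ordinary code that fits a
# pattern to examples a human supplies. Nothing here "thinks" -- the
# program only adjusts two numbers (a slope and an intercept) to match
# the data it was given.

# Human-provided examples: hours studied -> exam score (made-up numbers).
data = [(1, 52), (2, 58), (3, 65), (4, 71), (5, 78)]

# Least-squares fit of a straight line: score = slope * hours + intercept.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
        sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

# The "knowledge" the program has learned is just these two numbers.
print(f"score is roughly {slope:.1f} * hours + {intercept:.1f}")
print("prediction for 6 hours of study:", round(slope * 6 + intercept))
```

The program never knows anything beyond the five examples it was handed; everything it "learns" comes from, and is bounded by, human input.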
Sweeping claims about artificial intelligence are nothing new. Herbert Simon, a pioneer of the field, boldly predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do." That prediction never came true, yet the same lack of understanding of A.I. now spurs Tesla CEO Elon Musk's warnings, the same alarm echoed fifty years later. Face it: artificial intelligence will never reach the capacity for the human takeover witnessed in Hollywood films or exaggerated by the media. It is only a computer, confined to the amount of knowledge that humans put into it, and its benefits outweigh its inherent flaws and inhuman characteristics.
Clickbait titles and the media's need for increased viewership have hijacked the subject of A.I., instilling widespread fear and misinforming the public. Nowadays, the media's sole purpose is to generate profit, not to inform the general public on crucial matters, and so artificial intelligence has been portrayed as an impending doom. Take the headline published by The Independent on July 31, 2017: "FACEBOOK'S ARTIFICIAL INTELLIGENCE ROBOTS SHUT DOWN AFTER THEY START TALKING TO EACH OTHER IN THEIR OWN LANGUAGE." The headline implied that Facebook's artificial intelligence had become intelligent enough to create its own language. In reality, Facebook's goal for the program was to have agents negotiate with each other over items such as balls and toys, much as toddlers trade toys in preschool. When the project derailed and the agents began "talking" in their own way, it was reportedly shut down. But the agents had merely created a shorthand for English; by no means was a new language formed. They communicated with commonly used words, such as "I," "me," and "five," and still followed English grammatical conventions.
The media attacked the project instead of revelling in the newfound knowledge that there are more efficient ways to communicate. Facebook's programmers had preset many guidelines for the negotiating agents; however, no restrictions were placed on the language they could use. Though The Independent reported the incident accurately within the article itself, its headline misled the general public, many of whom read only the headline. It goes to show that the media either overlooks, or does not comprehend, the fact that programmers control artificial intelligence and machine learning.
The entire field of machine learning operates within limits and rules set by humans, precisely so that a program does not create preventable risks and problems for companies such as Facebook. Artificial intelligence is nothing more than a computer with incredible processing power: humans must teach it how to think, what is essential to analyze, and the range of solutions it is allowed to provide.
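A small illustrative sketch makes the point (the data and labels here are invented, and scikit-learn is simply one convenient library for the demonstration): a trained model can only ever answer with the categories its programmers defined in advance.

```python
# A sketch of the human-set boundaries in machine learning (made-up data):
# the programmer decides which features the model sees and which answers
# it is allowed to give. The model can never invent a new category.
from sklearn.tree import DecisionTreeClassifier

# Features chosen by a human: [weight in grams, has_wings (0 or 1)].
X = [[4000, 0], [300, 1], [5500, 0], [450, 1]]
# Answers allowed by a human: only these two labels exist for the model.
y = ["cat", "bird", "cat", "bird"]

model = DecisionTreeClassifier().fit(X, y)

# Whatever input arrives -- even something that is neither a cat nor a
# bird -- the model must pick from the label set defined above.
print(model.predict([[120000, 0]]))  # an elephant-sized input still maps to 'cat'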
Understandably, not all of the media's skepticism regarding A.I. is exaggerated. The most significant risk posed by A.I. is its ability to multiply errors, coupled with programmers' limited understanding of how their own systems learn. A Business Insider article from July 1, 2015, reported that Google's photo algorithm had incorrectly tagged two African-Americans as gorillas that year. Google immediately apologized and disabled the algorithm's ability to label images as gorillas or monkeys. The way Google handled the problem illustrates the most significant hazard to arise from machine learning: the "black box" phenomenon. A "black box" is the figurative box within an A.I. program where the learning actually happens. The only issue? It is nearly impossible to decipher what is going on inside that box. It would take an enormous number of programmers and human hours to trace the program's inner workings line by line, while the program churns through millions of new calculations every second. Since Google's programmers could not open the "black box" and pinpoint the issue, whether for technological or financial reasons, they simply banned the labels associated with primates.
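To get a feel for what the "black box" looks like from a programmer's desk, here is a toy sketch (invented data, and a small neural network built with scikit-learn as one possible stand-in): after training, the model's "reasoning" is nothing more than matrices of raw numbers that offer no human-readable explanation of any single prediction.

```python
# A toy look inside the "black box" (invented data): after training, the
# model's learned "knowledge" is nothing but matrices of numbers. Reading
# them tells a human almost nothing about *why* a prediction was made.
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 10))                    # 200 made-up examples, 10 features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # the hidden rule the model must learn

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X, y)

# The learned "knowledge": a 10x32 matrix and a 32x1 matrix of raw weights.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer}: {weights.shape[0]}x{weights.shape[1]} weights")
print("a sample of those numbers:", np.round(model.coefs_[0][0, :5], 3))
```

Even in this tiny example there are hundreds of opaque parameters; commercial systems have millions or billions, which is why "reading through" them to find one faulty association is impractical.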
To this day, this issue has not been solved, and the processing speed of modern computers only escalates it. Moore's Law, an observation made by Intel co-founder Gordon Moore, holds that computer processing power doubles roughly every year or two, so the capabilities, storage, and processing power of computers grow exponentially. Hypothetically, a program that took one full second to generate two thousand lines of code in the year 2000 would, if speed doubled every year, need only about four millionths of a second by 2018; it could repeat the task tens of thousands of times in the time it takes you to blink. Every second, astronomical amounts of data are processed, taken in, and churned into new knowledge. Even a small, unintentional mistake, caused either by Google's coders or by the computer itself, can grow exponentially within the system. Because A.I. is just a computer, it absorbs what we teach it, and one small error can wreak havoc on the program as it compounds into millions of problems.
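For the curious, the back-of-the-envelope arithmetic behind that claim, under the generous assumption that speed doubles once per year, works out as follows:

```python
# Back-of-the-envelope arithmetic for the speed-up claim above, assuming
# (generously) that processing speed doubles once per year.
years = 2018 - 2000                  # 18 doublings
speedup = 2 ** years                 # about 262,144 times faster
task_time_2000 = 1.0                 # the hypothetical one-second task
task_time_2018 = task_time_2000 / speedup

blink = 0.3                          # a human blink lasts roughly 0.3 seconds
print(f"speed-up factor: {speedup:,}x")
print(f"task time in 2018: {task_time_2018:.6f} s (about four millionths of a second)")
print(f"times the task fits inside one blink: {blink / task_time_2018:,.0f}")
```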
Nonetheless, A.I. will create new opportunities for humanity. It will spur changes in the way society interacts and communicates as technological progress increases in unprecedented ways. The rapid growth of A.I. in recent years is comparable to that of the Internet during the late 1990s. It was impossible then to imagine a company like Blockbuster, which posted profits upwards of $450 million in 1995, going bankrupt less than two decades later. It was equally unimaginable, at the dawn of the Internet age, to predict companies with the power and scale of Amazon or Netflix.
The quality of life of the average consumer in the United States has increased dramatically since the advent of the Internet, and artificial intelligence could follow the same trend of improving the public's lives. The boon of modern A.I. is unprecedented, making it impossible to imagine what future society will make of it. That uncertain future is, paradoxically, its most promising aspect: once the technology gets going, it will never stop. A.I. will be harnessed by a new company with an impact similar to that of Google or Facebook on the Internet, one that creates user-friendly platforms through which everyone can access the power of A.I. and use it in their daily lives. A.I. could advance the field of robotics, drastically reducing the number of lives lost, or change the landscape of medicine, where it could learn how to combat diseases and design new drugs. As with the Internet, the possibilities with A.I. are limitless.
Although there is no upper ceiling for A.I., we need to be cautious about the speed of its growth. A.I. development must slow down to give industries ample time to help workers at risk of losing their jobs transition into new careers. The government must set regulations to ensure that companies do not take extremely risky shortcuts in A.I. development and growth, because, as seen with error magnification, even the slightest mistake can cause detrimental harm.
Lessons must also be taken from the rapid expansion of the Internet. Companies developing A.I. should offer non-profit programs that help people who lose their jobs to this new technology. These programs would teach individuals whose jobs are at risk the skills needed to find new work or start a new career path. Such social programs are necessary because income inequality is tearing apart our society the same way it tore apart societies during past technological revolutions. The same pattern is emerging: predominantly in developed Western nations, we are shifting away from liberal values, and people and nations are increasingly gravitating towards autocratic strongman states to provide stability during the unfolding "Fourth Industrial Revolution."
As during the Industrial Revolution and the rise of automation in factories, people are scared. A.I. will make for a tumultuous decade ahead, as panicked people across the world fear losing their jobs and fear a world that offers fewer opportunities for their children. Those directly involved with A.I., the investors, the shareholders, and the executives, will climb to the top and widen the chasm between the haves and the have-nots. The government at this time needs to be agile and vigilant to prevent social unrest.
The issue of A.I. is tricky; a careful path must be taken to ensure that everyone receives the benefits it achieves. Despite the uncertainty, people should keep an open mind about A.I. and understand that the risks must be weathered in order to embrace what it will bring.
Just last year, the Government of Canada invested 125 million Canadian dollars in A.I. technology. I do not commend this, because at the end of the day, the government should invest in people, not technology.