The Fear of Artificial Intelligence is Real


Artificial intelligence, or AI, is here to stay, whether anyone likes it or not. 

But alarm bells are sounding everywhere about the havoc it has wreaked, and will ultimately wreak, on jobs and quality of life. 

Now, one of the field's founding figures has joined others in warning of a catastrophic future.

Geoffrey Hinton, the so-called godfather of AI, said it’s difficult to envision how to stop nefarious characters from using artificial intelligence for evil.

“It’s hard to see,” Hinton told reporters this week.

He said that as AI develops, it’s likely to threaten humans.

The World Economic Forum, which concluded that AI is “rife with contradictions,” published an alarming report in April, which included surveys of more than 800 companies.

The surveyed companies projected that AI would create 69 million new jobs by 2027 but eliminate 83 million, a net loss of 14 million jobs.

Before that report, economists at Goldman Sachs said up to 300 million full-time jobs globally would eventually become automated because of AI platforms like ChatGPT.

The economists noted that white-collar workers face the most risk, while construction and many other blue-collar jobs remain largely unaffected.

The economists stated that two-thirds of U.S. and European jobs are now exposed to some degree of AI automation.

Still, the forum conceded that AI “is a powerful tool that is also surprisingly limited in terms of its current capabilities.”

Recent advances in AI technologies have generated excitement and concern, as the Association for the Advancement of Artificial Intelligence (AAAI) acknowledged. 

“As researchers who have served in leadership positions in the AAAI, we are writing to provide a balanced perspective on managing the progress in the field,” the group said in a letter.

“We also seek to broaden and strengthen the community of engaged researchers, government agencies, private companies, and the public at large to ensure that society is able to reap the great promise of AI while managing its risks.”

Signed by 19 academic leaders, the letter noted that AAAI is “aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs.”

They asserted that researchers in AI and across multiple disciplines are hard at work identifying and developing ways to address these shortcomings and risks while strengthening the benefits and identifying positive applications. 

In some cases, AI technology itself can be applied to create trusted oversight and guardrails to reduce or eliminate failures, the group insisted.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton stated when asked whether he thought AI would have such an immediate impact. 

“But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Meanwhile, journalists have acknowledged some fear over AI.

The Columbia Journalism Review (CJR) recently quoted experts who said that the biggest flaw in a “large language model” like ChatGPT is that, while it is capable of mimicking human writing, it has no real understanding of what it is writing about. As a result, it frequently inserts errors and flights of fancy that some have referred to as “hallucinations.” 

CJR reported that Colin Fraser, a data scientist at Meta, wrote, “The central quality of this type of model is that they are incurable, constant, shameless bullsh–ters. Every single one of them. It’s a feature, not a bug.” 

And Gary Marcus, a professor of psychology and neuroscience at New York University, has likened this kind of software to “a giant autocomplete machine.”
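The “autocomplete machine” comparison can be made concrete with a deliberately tiny sketch. The following toy bigram model (a vast simplification of, not a description of, how systems like ChatGPT actually work) predicts the next word purely from how often it followed the previous word in its training text. It has no notion of truth or meaning; whatever the word-frequency statistics say, it confidently emits. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "autocomplete": count which word tends to follow which.
# No understanding is involved -- only frequency statistics.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("sat"))  # "on" -- the only word ever seen after "sat"
print(autocomplete("the"))  # a frequent follower of "the", e.g. "cat"
```

Real large language models predict tokens with neural networks over far richer context, but the core objective is the same kind of next-item prediction, which is why plausible-sounding but false continuations (“hallucinations”) are a natural failure mode.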
