New Poll Reveals Concerns About Manipulated News And Information


by Daniel Johnson

According to a poll from a media watchdog organization, 79% of Americans are concerned that news they read online contains manipulated or fake information designed to confuse people. In addition, 76% of those surveyed expressed concern that the news they are being fed about the November presidential election cannot be trusted.

As the Philadelphia Inquirer reported, a survey from Free Press, conducted with the African American Research Collaborative and BSP Research, polled 3,000 Americans about their confidence in news gathered from online sources. According to the polling data, most Americans, but especially Black Americans, get their news from Facebook and YouTube, two sources that do not necessarily vet information before it is posted on their sites.

According to Marc Temple, the executive director of the Philadelphia chapter of Concerned Black Men, a mentorship non-profit focused on Black youth, Black people need to be vigilant about potentially spreading fake news. “Folks are using social media as research, but they’re not then going on to research the research,” Temple said. “And now, with artificial intelligence out there with all that false information, you can get egg on your face if you only read Facebook and don’t do your homework.”

Likewise, Timothy Welbeck, the director of Temple University’s Center for Anti-Racism and a professor of African American Studies, told the outlet that Facebook is a popular platform for Black people because it encourages the building of community bonds. “People who are marginalized in everyday life find like-minded communities on social media. Black people understand that disinformation is a tool to disrupt social progress,” Welbeck added. “It’s been used against them to disrupt access to voting or to encourage apathy about voting.”

As BLACK ENTERPRISE reported in March, Trump supporters used artificial intelligence to fake photos of the former President with groups of Black people. These photos were disseminated online and managed to convince some Black people that Black support for the now-felon was increasing. 

In May, Wired announced that it would be tracking the rise of artificial intelligence in elections worldwide, and it cautioned that the propensity of tech platforms like Facebook, Twitter/X, and YouTube to disseminate misinformation, disinformation, scams, and hateful content makes them a veritable breeding ground for amplifying those issues.

According to Wired's tracker, in the United States, in addition to the AI-generated photos of Black Americans, there have been several other instances of artificial intelligence in the political sphere: a fake robocall featuring the voice of President Joe Biden encouraging voters to stay home, a deepfake pornographic video of New York Congresswoman Alexandria Ocasio-Cortez, AI-generated pictures of President Joe Biden in army fatigues, a deepfake video featuring Arizona Republican Senate candidate Kari Lake, a deepfake video of Joe Biden saying that Russia has occupied Kyiv for a decade, and a Democratic candidate for Congress using AI in political advertising campaigns.

Wired reporter Vittoria Elliott, who pitched the AI-in-elections project, described her concerns about artificial intelligence in the 2024 election on an episode of Wired Politics Lab: “Social media companies have been struggling for years with how to deal with the sticky issues of elections and politics, particularly around mis- and disinformation. Now, we’re adding a new layer on top of that, which is generative AI. It’s really tough right now because deepfakes, video fakes, things like that are really obvious sometimes. But that is just the tip of the iceberg. That’s just the stuff that is all more evident to us.”

Elliott continued, “Well, more legitimate companies like Midjourney and ChatGPT, OpenAI, Google, et cetera, they’ve said, ‘We’re going to put guardrails on. We’re not going to allow generating political images.’ ChatGPT, which is text-based, they’ve said, ‘It’s not cool to use our tool to generate political stuff for campaigns,’ or whatever, ‘You can’t run a chatbot on top of our interface,’ basically. But they’re not doing great at enforcing it.”

RELATED CONTENT: UPenn Becomes First Ivy League College To Launch AI Degree Programs
