
Coldplay Confirms India Tour In 2025. Details Here

British rock band Coldplay will be performing in India as part of the Music of the Spheres World Tour in 2025. The band will reportedly perform in Mumbai, and more details are awaited. However, BookMyShow unveiled a short teaser of the upcoming concert on Wednesday on its Instagram profile. The post showcases a motion image announcing Coldplay's performance in Mumbai and drew instant reactions. A fan wrote, "Let's goo!!!" Another fan wrote, "Wohooo!" Another comment read, "Can't wait to be there with my yellows for life."

Coldplay previously performed in India in 2016 as part of the Global Citizen Festival in Mumbai, which also featured a lineup of several other music artistes and bands. The festival was held to promote global development goals. The upcoming concert will mark Coldplay's second performance in the country.

White House "Alarmed" After Taylor Swift, Joe Biden Deepfakes Surface

Deepfakes generated by artificial intelligence have proliferated on social media this month, claiming a string of high-profile victims and pushing the risks of manipulated media into the public conversation ahead of a looming US election cycle.

Pornographic images of singer Taylor Swift, robocalls of US President Joe Biden's voice, and videos of dead children and teenagers detailing their own deaths all have gone viral - but not one of them was real.

Misleading audio and visuals created using artificial intelligence aren't new, but recent advancements in AI technology have made them easier to create and harder to detect. The torrent of highly publicized incidents just weeks into 2024 has escalated concern about the technology among lawmakers and regular citizens.

"We are alarmed by the reports of the circulation of false images," White House press secretary Karine Jean-Pierre said Friday. "We are going to do what we can to deal with this issue."

At the same time, the spread of AI-generated fake content on social networks has offered a stress test for platforms' ability to police them. On Wednesday, explicit AI-generated deepfaked images of Swift amassed tens of millions of views on X, the website formerly known as Twitter that is owned by Elon Musk.

Although sites like X have rules against sharing synthetic, manipulated content, the posts portraying Swift took hours to remove. One remained up for about 17 hours and had more than 45 million views, according to The Verge, a sign that these images can go viral long before action is taken to stop them.

Cracking Down

Companies and regulators have a responsibility to stop the "perverse customer journey" of obscene manipulated content, said Henry Ajder, an AI expert and researcher who has advised governments on legislation against deepfake pornography. That means "identifying how different stakeholders, whether they are search engines, tool providers or social media platforms, can do a better job creating friction in the process from someone forming the idea to actually creating and sharing the content," he said.

The Swift episode prompted fury from her legions of fans and others on X, causing the phrase "protect Taylor Swift" to trend on the social platform. It's not the first time the singer's image has been used in explicit AI manipulation, though it is the first time it has drawn this level of public outrage.

The top 10 deepfake websites hosted about 1,000 videos referencing "Taylor Swift" at the end of 2023, according to a Bloomberg review. Internet users graft her face onto the bodies of porn performers or offer paying customers the ability to "nudify" victims using AI technology.

Many of these videos are available through a quick Google search, which has been the primary traffic driver to deepfake websites, according to a 2023 Bloomberg report. While Google offers a form letting victims request removal of deepfake content, many complain the process resembles a game of whack-a-mole. At the time of Bloomberg's report last year, a spokesperson for Google said the Alphabet Inc. company designs its search ranking systems to avoid shocking people with unexpected harmful or explicit content they don't want to see.

Almost 500 videos referencing Swift were hosted on the top deepfake site, Mrdeepfakes.com. In December, the site received 12.3 million visits, according to data from Similarweb.

Targeting Women

"This case is horrific and no doubt extremely distressing for Swift, but it's sadly not as groundbreaking as some may think," Ajder said. "The ease of creating this content now is disturbing and affecting women and girls, regardless of where they in the world or their social status."

As of Friday afternoon, explicit AI-generated images of Swift were still on X. A spokesperson for the platform directed Bloomberg to the company's existing statement, which said non-consensual nudity is against its policy and the platform is actively trying to remove such images.

Users of popular AI image-maker Midjourney are already taking advantage of at least one of the fake visuals of Swift to come up with written prompts that can be used to make more explicit pictures with AI, according to requests in a Midjourney Discord channel reviewed by Bloomberg. Midjourney has a feature in which people can upload an existing image to its Discord chat channel - where prompts are input to tell the technology what to create - and it will generate text that can be used to make another image like it via Midjourney or another similar service.

The output of that feature is on a public channel for any of the more than 18 million members of Midjourney's Discord server to see, giving them the equivalent of tips and tricks for fine-tuning AI-generated pornographic imagery. On Friday afternoon, there were nearly 2 million people active on the server.

Midjourney and Discord didn't respond to requests for comment.

Surging Numbers

Amid the AI boom, the number of new pornographic deepfake videos has already surged more than ninefold since 2020, according to research from independent analyst Genevieve Oh. At the end of last year, the top 10 sites offering this content hosted 114,000 videos, and Swift was already a common target.

"Whether it's AI or real, it still damages people," said Heather Mahalik Barnhart, a digital forensics expert who develops curriculum for the SANS Institute, a cyber education organization. With the images of Swift, "even though it's fake, imagine the minds of her parents who had to see that - you know, when you see something, you can't make it go away."

Just days before the images of Swift created a firestorm, a deepfake audio message of Biden had been spread in advance of the New Hampshire presidential primary election. Global disinformation experts said that robocall, which sounded like Biden telling voters to skip the primary, was the most alarming deepfaked audio they had heard yet.

There are already concerns that deepfaked audio or video could play a role in upcoming elections, fueled by how quickly such content spreads on social media. The fake Biden message was dialed directly into people's telephones, which provided fewer means for experts to scrutinize the call.

"The New Hampshire primary gives us the first taste of the situation we have to deal with," said Siwei Lyu, a professor at the University at Buffalo who specializes in deepfakes and digital media forensics.

Difficult to Detect

Even on social media, there are currently no reliable detection capabilities, which leaves a frustratingly roundabout process that depends on someone spotting a piece of content and doubting it enough to go to the source to confirm it. That's a presumably more likely scenario for a prominent public figure like Swift or Biden than a local official or private citizen. Even if companies identify and remove these videos, they spread so quickly that often the damage has already been done.

A viral deepfaked video of a victim of the Oct. 7 terrorist attack on Israel, Shani Louk, has amassed more than 7.5 million views on ByteDance Ltd.'s TikTok app since it was posted more than three months ago, even after Bloomberg singled it out for the company in a December story about the platform's struggle to police AI-generated videos of dead victims, including children.

The video-sharing app has banned AI-generated content depicting private citizens or children, and says "gruesome" or "disturbing" video is also not allowed. As recently as this week, deepfaked videos of dead children voicing the details of their abuse and death were still popping into users' feeds and amassing thousands of views. TikTok removed the videos that Bloomberg sent to it for comment, but as of Friday, dozens of videos and accounts that exclusively post this kind of disturbing fake content were still live.

TikTok has said it's investing in detection technologies and is working to educate users on the dangers of AI-generated content. Other social networks have voiced similar sentiments.

"You can't respond to something, you can't react to something - let alone regulate something - if you can't first detect it," said Nick Clegg, president of public affairs at Facebook and Instagram owner Meta Platforms Inc., at the World Economic Forum in Davos, Switzerland, earlier this month.

Few Laws

There is currently no US federal law banning deepfakes, including those that are pornographic in nature. Some states have implemented laws regarding deepfake pornography, but their application is inconsistent across the country, making it difficult for victims to hold the creators to account. 

Jean-Pierre, the White House press secretary, said Friday that the administration is working with AI companies on unilateral efforts that would watermark generated images to make them easier to identify as fakes. Biden has also appointed a task force to address online harassment and abuse, while the US Justice Department created a hotline for those victimized by image-based sexual abuse.

Congress has begun discussing legislative steps to protect celebrities' and artists' voices from AI usage in some cases. Absent from those conversations are any protections for private citizens.

Swift has made no public comment on the issue, including whether she will take legal action. If she chooses to do so, she could be in a position to take on that sort of challenge, said Sam Gregory, executive director of Witness, a nonprofit organization that uses ethical technology to highlight human rights abuses.

"In absence of federal legislation, having a plaintiff like Swift who has the capability and willingness to go after this using all available means to make a point - even if the likelihood of success is low or long-term - is one next step," Gregory said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)




After five fun-filled episodes of Koffee With Karan 8 , Karan Johar has “hit the rewind button hard” (not our words). The filmmaker's next guests on the show are his “first leading ladies” — Rani Mukerji and Kajol . From Kajol “staging a walkout” to Rani planning to “expose” KJo, the promo of the upcoming episode is all things amazing. Above all, what caught our attention was that Kajol didn't know “Rani had a special appearance” in Kabhi Khushi Kabhie Gham . The actress has confessed it on the infamous koffee couch. It happened when Karan Johar, during a buzzer round, asked “One Kajol film in which Rani had a special appearance?” Well, Kajol was super fast with the buzzer but she didn't know the answer. Then, Karan said, “How are you so stupid? The answer is Kabhi Khushi Kabhie Gham .” A surprised Kajol asked, “Rani had a special appearance in the film?” Can't miss Rani and Karan Johar's reaction. Too good, Kajol, too good. Kabhi Khushi Kabhie Gham was Karan