
The week in AI: generative AI is spamming the web

by Ana Lopez

Keeping up in an industry that evolves as fast as AI is quite a task. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments that we didn’t cover on their own.

This week, SpeedyBrand, a company that uses generative AI to create SEO-optimized content, emerged from stealth with the backing of Y Combinator. It has not yet attracted much funding ($2.5 million) and its customer base is relatively small (about 50 brands). But it got me thinking about how generative AI is starting to change the makeup of the web.

As James Vincent of The Verge recently wrote, generative AI models make it cheaper and easier to generate lower-quality content. NewsGuard, a company that provides tools for vetting news sources, has exposed hundreds of ad-supported sites with generic-sounding names featuring misinformation created with generative AI.

This poses a problem for advertisers. Many of the sites spotlighted by NewsGuard appear to be built solely to exploit programmatic advertising – the automated systems for placing ads on pages. In its report, NewsGuard found nearly 400 instances of advertisements from 141 major brands appearing on 55 of the junk news sites.

It’s not just advertisers who should be concerned. As Kyle Barr of Gizmodo points out, it may take only one AI-generated article to rack up mountains of engagement. And even if each AI-generated article brings in just a few dollars, that’s more than it cost to generate the text in the first place – potential ad money that isn’t going to legitimate sites.

So what’s the solution? Is there one? Those are the questions that increasingly keep me up at night. Barr suggests it’s up to search engines and ad platforms to tighten their grip and punish the bad actors embracing generative AI. But given how fast the field moves – and the infinitely scalable nature of generative AI – I’m not convinced they can keep up.

Of course, spammy content is not a new phenomenon; there have been waves of it before, and the web has adapted. What’s different this time is that the barrier to entry is dramatically lower – both in cost and in the time that has to be invested.

Vincent strikes an optimistic note, suggesting that if the web is eventually overrun with AI junk, it could spur the development of better-funded platforms. I’m not so sure. What there’s no doubt about, however, is that we’re at an inflection point, and the decisions being made now around generative AI and its output will shape the function of the web for some time to come.

Here are other AI stories from the past few days:

OpenAI officially launches GPT-4: OpenAI this week announced the general availability of GPT-4, its latest text-generating model, through its paid API. GPT-4 can generate text (including code) and accepts image as well as text input – an improvement over its predecessor GPT-3.5, which only accepted text – and performs at “human level” on several professional and academic benchmarks. But it’s not perfect, as we noted in our previous coverage. (Meanwhile, ChatGPT adoption is reportedly down – but we’ll see.)
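For the curious, here’s roughly what calling GPT-4 through the paid API looked like at launch, using OpenAI’s official Python package as it existed at the time (the prompt is just an illustration):

```python
# Minimal sketch of a GPT-4 call via OpenAI's paid API, using the pre-1.0
# openai Python package's ChatCompletion interface.
import openai

openai.api_key = "YOUR_API_KEY"  # set your own key here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this week's AI news in one sentence."},
    ],
)

print(response.choices[0].message.content)
```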

Getting “Super-intelligent” AI under control: In other OpenAI news, the company is forming a new team led by Ilya Sutskever, the chief scientist and one of OpenAI’s co-founders, to develop ways to direct and control “super-intelligent” AI systems.

Anti-bias law for NYC: After months of delays, New York City this week began enforcing a law that requires employers using algorithms to recruit, hire, or promote employees to submit those algorithms for an independent audit and make the results public.

Valve quietly greenlights AI-generated games: Valve issued a rare statement after claims that it was rejecting games with AI-generated assets from the Steam store. The notoriously tight-lipped company said its policy is evolving and that it isn’t taking a stand against AI.

Humane reveals the Ai Pin: Humane, the startup launched by ex-Apple design and engineering duo Imran Chaudhri and Bethany Bongiorno, this week revealed details about its first product: the Ai Pin. As it turns out, Humane’s product is a wearable gadget with a projected screen and AI-powered features – like a futuristic smartphone, but in a completely different form factor.

Warnings about EU AI regulations: Major tech founders, CEOs, VCs and industry giants across Europe signed an open letter to the European Commission this week, warning that Europe could miss out on the generative AI revolution if the EU passes laws that nip innovation in the bud.

Deepfake scam doing the rounds: Watch this clip of British consumer finance champion Martin Lewis apparently shilling an investment opportunity backed by Elon Musk. Seems normal, right? Not exactly. It’s an AI-generated deepfake – and possibly a glimpse of the AI-generated misery soon to appear on our screens.

AI-powered sex toys: Lovense – perhaps best known for its remote-controlled sex toys – announced its ChatGPT Pleasure Companion this week. Launched in beta in the company’s remote-control app, the “Advanced Lovense ChatGPT Pleasure Companion” invites you to enjoy juicy, erotic stories that the Companion creates based on your chosen topic.

Other machine learning

Our research roundup starts with two very different projects from ETH Zurich. The first is aiEndoscopic, a smart intubation spinoff. Intubation is necessary for a patient’s survival in many circumstances, but it’s a tricky manual procedure usually performed by specialists. The intuBot uses computer vision to recognize and respond to a live feed of the mouth and throat, guiding and correcting the position of the endoscope. This could allow people to intubate safely when needed rather than waiting for a specialist, potentially saving lives.


In a completely different domain, ETH Zurich researchers also contributed, at one remove, to a Pixar film by pioneering the techniques needed to animate smoke and fire without falling prey to the fractal complexity of fluid dynamics. Their approach was noticed and built on by Disney and Pixar for the movie Elemental. Interestingly, it’s not so much a simulation solution as a style-transfer one – a clever and apparently quite valuable shortcut. (The image at the top of this post comes from that work.)

AI applied to nature is always interesting, but AI applied to archaeology is even more so. Research led by Yamagata University focused on identifying new Nasca lines – the huge “geoglyphs” etched into the ground in Peru. You’d think that, being visible from orbit, they’d be fairly obvious – but erosion and tree cover in the millennia since these mysterious formations were created mean an unknown number are hiding just out of sight. After being trained on aerial imagery of known and obscured geoglyphs, a deep learning model was set loose on other views and, amazingly, discovered at least four new ones, as you can see below. Pretty exciting!

Four Nasca geoglyphs recently discovered by an AI agent.
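The paper doesn’t come with code, but the general recipe – fine-tuning a pretrained image classifier on tiles of aerial imagery labeled as geoglyph or background – might look something like the sketch below. The dataset layout and hyperparameters here are assumptions for illustration, not the Yamagata team’s actual setup:

```python
# Hedged sketch: fine-tune a pretrained CNN to flag aerial-image tiles that
# may contain geoglyphs. Folder layout and hyperparameters are assumptions.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: tiles/geoglyph/*.png and tiles/background/*.png
train_set = datasets.ImageFolder("tiles/", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # geoglyph vs. background

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```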

On a more directly relevant note, AI-adjacent technology is always finding new work in detecting and predicting natural disasters. Stanford engineers are collecting data to train future wildfire prediction models by running simulations of heated air above a forest canopy in a 9-meter-high water tank. If we’re to model the physics of flames and embers traveling beyond the edge of a wildfire, we need to understand that physics better, and this team is doing what it can to approximate it.

At UCLA, researchers are working on predicting landslides, which are becoming more common as wildfires and other environmental factors change. While AI has already been used to predict them with some success, it “doesn’t show its work”: a prediction doesn’t explain whether the risk comes from erosion, a shift in groundwater levels, or tectonic activity. A new “superimposable neural network” approach has the network’s layers use different data sources but run in parallel rather than all together, making the output more specific about which variables lead to increased risk. It’s also much more efficient.
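To make the idea concrete, here’s a toy sketch of what “superimposable” could mean in practice: one small subnetwork per input variable, run in parallel, with the risk score formed as a simple sum so each variable’s contribution can be read off directly. This illustrates the concept only; it isn’t the UCLA group’s code, and the feature names are hypothetical:

```python
# Toy "superimposable" network: independent branches per input variable,
# combined additively so each variable's contribution to risk is explicit.
import torch
from torch import nn

class SuperimposableNet(nn.Module):
    def __init__(self, feature_names):
        super().__init__()
        # One independent branch per feature (names here are hypothetical)
        self.branches = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
            for name in feature_names
        })

    def forward(self, inputs):
        # inputs: dict mapping feature name -> tensor of shape (batch, 1)
        parts = {name: branch(inputs[name])
                 for name, branch in self.branches.items()}
        risk = torch.stack(list(parts.values())).sum(dim=0)
        return risk, parts  # total risk plus per-variable attribution

net = SuperimposableNet(["erosion", "groundwater", "tectonic_activity"])
x = {name: torch.randn(4, 1) for name in net.branches}
risk, parts = net(x)
print(risk.shape, {k: v.shape for k, v in parts.items()})
```

Because the final score is just a sum, each branch’s output directly answers “how much did this variable raise the risk?” – which is exactly the kind of legibility the UCLA work is after.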

Google faces an interesting challenge: how do you get a machine learning system to learn from dangerous knowledge without spreading it? For example, if its training set contains the recipe for napalm, you don’t want the model to repeat it – but in order to know not to repeat it, it has to know what it’s not repeating. A paradox! So the tech giant is looking for a method of “machine unlearning” that allows this kind of balancing act to happen safely and reliably.
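There’s no settled recipe for this yet, but one simple baseline from the unlearning literature – gradient ascent on the examples to be forgotten, interleaved with ordinary training on retained data so general capability survives – looks roughly like this purely illustrative sketch (not Google’s method):

```python
# Illustrative machine-unlearning baseline: gradient *ascent* on data to
# forget, plus normal descent on retained data. Not Google's actual approach.
import torch
from torch import nn

def unlearning_step(model, loss_fn, optimizer, forget_batch, retain_batch):
    fx, fy = forget_batch
    rx, ry = retain_batch

    optimizer.zero_grad()
    # Negating the loss on the forget set turns descent into ascent,
    # pushing the model away from the memorized mapping...
    forget_loss = -loss_fn(model(fx), fy)
    # ...while the retain loss keeps overall performance from collapsing.
    (forget_loss + loss_fn(model(rx), ry)).backward()
    optimizer.step()

# Toy usage with a hypothetical classifier
model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
forget = (torch.randn(4, 8), torch.randint(0, 2, (4,)))
retain = (torch.randn(16, 8), torch.randint(0, 2, (16,)))
unlearning_step(model, ce, opt, forget, retain)
```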

If you want to learn more about why people seem to trust AI models for no good reason, look no further than this Science editorial by Celeste Kidd (UC Berkeley) and Abeba Birhane (Mozilla). It delves into the psychological underpinnings of trust and authority and shows how today’s AI systems exploit them, inflating their own perceived worth in the process. It’s a really interesting read if you want to sound smart this weekend.

We often hear about the infamous Mechanical Turk, the fake chess-playing machine, but that charade inspired people to build what it only pretended to be. IEEE Spectrum has a fascinating story about the Spanish physicist and engineer Leonardo Torres Quevedo, who created a real mechanical chess player. Its capabilities were limited, but that’s how you know it was real. Some even propose that his chess machine was the first “computer game.” Something to think about.

