
Is ChatGPT a ‘virus released into the wild’?

by Ana Lopez

More than three years ago, this editor spoke with Sam Altman at a small event in San Francisco, shortly after he stepped down as president of Y Combinator to become CEO of OpenAI, the AI company he co-founded in 2015 with Elon Musk and others.

At the time, Altman described OpenAI’s potential in language that sounded bizarre to some. For example, Altman said the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human can — is so staggeringly large that if OpenAI succeeded in cracking it, the outfit “might capture the light cone of all future value in the universe.” He said the company might have to hold back some of its research because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Elon Musk, a co-founder of the outfit, has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at several points during the conversation, unsure how seriously to take Altman. No one is laughing now, however. Although machines are not yet as smart as humans, the technology that OpenAI has since released to the world comes close enough that some critics fear it could be our downfall (and more advanced technology is reportedly on the way).

Indeed, though heavy users insist it isn’t all that smart, the ChatGPT model that OpenAI made available to the general public last week can answer questions so much like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they will distinguish original writing from the algorithmically generated essays they are bound to receive — essays that can also evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He is an economist, venture capitalist, and MIT fellow who calls himself a “frustrated normal person with a penchant for thinking about risk and unintended consequences in complex systems.” But he is one of those suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this atomic bomb without restrictions into an unprepared society.” Kedrosky wrote: “It is clear to me that ChatGPT (and its ilk) should be withdrawn immediately. And, if it is ever reintroduced, only with strict restrictions.”

We spoke with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he calls the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What caused your reaction on Twitter?

PK: I’ve played around with these conversational UIs and AI services in the past, and this is clearly a big step forward. What particularly bothers me here is the casual brutality of it, with huge implications for a wide variety of activities. It’s not just the obvious ones, like high school essay writing, but pretty much every domain where there is a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. They are all easily swallowed by this voracious beast and spit back out again without compensation for whatever was used to train it.

I heard from a colleague at UCLA who told me they have no idea what to do with the essays at the end of the current semester, where they get hundreds per course and thousands per department, because they no longer have any idea what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — brings to mind the so-called [ethical] white-hat hacker who finds a bug in a widely used product and informs the developer before the general public knows, so that the developer can patch the product and we don’t have mass destruction and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It feels like it could eat the world.

Some might say, “Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a broader kind of phenomenon.” But this is very different. These specific learning technologies are self-catalyzing; they are learning from the requests. So while robots in a factory were disruptive and had enormous economic consequences for the people working there, they didn’t then turn around and start consuming everything inside the factory, moving sector by sector — which is not only what we can expect but what we should expect.

Musk left OpenAI in part over disagreements about the company’s direction, he said in 2019, and he has long talked about AI as an existential threat. But people complained that he didn’t know what he was talking about. Now we are confronted with this powerful technology, and it’s not clear who steps in to deal with it.

I think it will start in a number of places at once, most of which will look really clumsy, and people will [then] scoff, because that’s what technologists do. But shame on us, because we got here by creating something of such consequence. So in the same way the FTC years ago required people who run blogs to [make clear they] have affiliate links and make money from them, I think at a trivial level people will be forced to disclose that “we wrote none of this. This is all machine-generated.”

I also think we’re going to see new energy for the pending lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine-learning algorithms. I think there will be a broader DMCA issue here with this service.

And I think there’s the potential for a [massive] lawsuit and eventual settlement over the consequences of these services, which, you know, will probably take too long and won’t help enough people, but I don’t see how we don’t end up in [that place] with these technologies.

What is the thinking at MIT?

Andy McAfee and his group there are more optimistic and hold the more orthodox view that whenever we see disruption, other opportunities are created, people are mobile, they move from place to place and from profession to profession, and we shouldn’t be so hidebound as to think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular is that these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists watching it that the economy would adjust and that people, in general, would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about essay writing in high school and college. One of our kids has already asked — in theory! — whether it would be plagiarism to use ChatGPT to write a paper.

The purpose of writing an essay is to prove that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t give people homework assignments because we no longer know whether they’re cheating or not, it means everything has to be done in the classroom, supervised. Nothing can be taken home. More things have to be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller, and at exactly the moment we’re trying to do the opposite. The consequences for higher education are devastating when it comes to actually delivering a service.

What do you think about the idea of a universal basic income, or allowing everyone to participate in the profits from AI?

I am much less of a supporter than I was before COVID. The reason is that COVID was, in a way, an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens when people don’t have to get in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there will be a lot of idle hands and a lot of devilish work.

