It’s OK to call it Artificial Intelligence: I wrote about how people really love objecting to the term "AI" to describe LLMs and suchlike because those things aren't actually "intelligent" - but the term AI has been used to describe exactly this kind of research since 1955, and arguing otherwise at this point isn't a helpful contribution to the discussion.

simonwillison.net/2024/Jan/7/c…


in reply to Simon Willison

Hearty agreement - though we now have the challenge of how to manage the boom of popular awareness, balanced against the public's understandably non-technical definition of intelligence. I'd argue ;) that the "arguing otherwise" is often about exactly that vocabulary mismatch.
in reply to Simon Willison

Short version: "I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet."
in reply to Simon Willison

What makes this hard currently is that many of the loudest advocates *are explicitly* talking about skynet (or "digital god" or whatever). And it seems like they're using this history of the term as cover with a general audience.
in reply to Simon Willison

the term "AI" deeply misleads laypeople into thinking sentient minds are at play, leading to all kinds of misuse/harm. I dont have to list links to all the damage "AI" has done so far due to people putting it in charge of things since "it's intelligent".

going to keep using technical terms like "machine learning" so that all the non-tech people I talk to understand a tech person like me does not consider this stuff to be "intelligent" in any way we usually define that term for humans

in reply to mike bayer

less harm was done in 1955, 1960, 1970 etc. because we didn't have machines so singularly focused on pretending to be (confident, authoritative) humans at such massive scale - there was little chance of misunderstanding back then. Now these machines have "I hope you misunderstand what I do" at their core
in reply to mike bayer

@zzzeek That's a very strong argument. I'm going to add a longer section about science fiction to my post, because that's the reason I held off on the term for so long too
in reply to Simon Willison

@zzzeek Added that section here simonwillison.net/2024/Jan/7/c…
in reply to Simon Willison

@zzzeek this exchange is intelligence well demonstrated. thank you both for that.
in reply to mike bayer

To illustrate the center of my assertion "I hope you misunderstand what I do", I would use the "AI Safety" letter as the prime example: billionaires and billionaire-adjacent types declaring that this "AI" is so, so close to total sentience that governments *must* stop everyone (except us! who should be gatekeepers) from developing this *so very dangerous and powerful!* technology any further

lots of non-tech ppl signed onto that thing and it was quite alarming

in reply to mike bayer

@zzzeek urgh, yeah the thing where people are leaning into the science fiction definition to help promote the technology is really upsetting
in reply to Simon Willison

Feels a lot like how the government long ago adopted “cyber” to encompass any kind of computer/network discussions. Everybody in industry hates it but is forced to play along because it opens doors (and wallets) of those outside the industry.
in reply to Simon Willison

I added an extra section to my post providing a better version of the argument as to why we shouldn't call it AI simonwillison.net/2024/Jan/7/c…
in reply to Simon Willison

Calling LLMs “AI” is a bald-faced lie.

The promoters try to excuse it by saying they’re using a different definition of intelligence now. But they know nobody else is using this novel definition.

They are getting away with it because we live in the Era of Shamelessness.

in reply to Simon Willison

I do tend to agree with your argument. It doesn't matter that much what we call it at this point - it's a clear umbrella term for the majority of the population. You can get more granular as discussion gets more specific and academic. I don't think my mom is going to understand the difference between AGI and a multi-modal large language model (MMLLM?) - it's absurd to expect otherwise. Meanwhile, these systems are becoming part of everyone's life - these nuances are meaningless.
in reply to Ganonmaster

Focusing on the semantics is a distraction from the real tangible impact that these systems are having on our daily lives. AI is causing measurable harm as we speak, and quarreling about semantics is a stupid, meaningless distraction from the real world impact that these systems are having. (power consumption/global warming, inaccurate/invalid results, cheap/slave labor used for data labeling, rights issues, privacy violations, etc.)
in reply to Ganonmaster

@ganonmaster 100% this - my concern is that anyone who says "You know it's not even AI?" is wasting an opportunity to have a more useful conversation
in reply to Simon Willison

And another section trying to offer a useful way forward: Let’s tell people it’s “not AGI” instead

simonwillison.net/2024/Jan/7/c…

in reply to Simon Willison

... OK, I'm cutting myself off now - I added one last section, "Miscellaneous additional thoughts", with further thinking inspired by the conversation here: simonwillison.net/2024/Jan/7/c… - plus a closing quote from @glyph
in reply to Simon Willison

@glyph This is an interesting piece, Simon - thank you for writing it.

I wonder if you're not somewhat undermining your own argument.

There is no reason at all why the interface to an LLM needs to be a chat interface "like you're talking to a human". That is a specific choice - and we have known for decades that humans will attach undue significance to something that "talks like a person" - all the way back to Eliza. 1/
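
For illustration, a minimal ELIZA-style sketch (hypothetical rules, not Weizenbaum's original script) shows how little machinery it takes for software to "talk like a person" - a handful of regex patterns plus pronoun reflection:

    # A toy ELIZA-style responder: pattern matching and pronoun swapping, nothing more.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"because (.*)", "Is that the real reason?"),
        (r"(.*)", "Please tell me more."),
    ]

    def reflect(text):
        # Swap first/second person words so the echo reads like a reply.
        return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

    def respond(message):
        # First matching pattern wins; echo the captured text back with pronouns flipped.
        for pattern, template in RULES:
            match = re.match(pattern, message.lower())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I am worried about AI"))  # -> How long have you been worried about ai?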

in reply to Ben Evans

@kittylyst @glyph I'm more than happy to undermine my own argument on this one - I don't have a particularly strong opinion here other than "I don't think it's particularly useful to be pedantic about the I in AI".

100% agree that the chat interface is a big part of it, and also something which isn't necessarily the best UI for working with these tools, see also: simonwillison.net/2023/Oct/17/…

in reply to Ben Evans

@glyph Therefore, this is an explicit design choice on the part of the product designers from these companies - and I struggle to see any reason for it other than to deliberately exploit the blurring of the distinction between "AI" & AGI - for the purpose of confusing non-technical investors and thus to juice valuations - regardless of the consequences. 2/
in reply to Ben Evans

@kittylyst @glyph The thing I've found particularly upsetting here is the way ChatGPT etc talk in the first person - they even offer their own opinions on things some of the time! It's incredibly misleading.

Likewise the thing where people ask them questions about their own capabilities, which they then convincingly answer despite not having accurate information about "themselves" simonwillison.net/2023/Mar/22/…

in reply to Simon Willison

@glyph Absolutely - this is what I'm getting at when I say that these are explicit product design decisions without a convincing justification other than to cynically juice valuations.
in reply to Simon Willison

Added this just now, a thing I learned from social.juanlu.space/@astrojuan… which gave me an excuse to link to 99percentinvisible.org/episode… (I'll never skip an excuse to link to that)


in reply to Simon Willison

Casual thought: maybe a good term for "artificial intelligence" that's actually intelligent... is intelligence!
in reply to Simon Willison

I’ve yet to hear rigorous definitions of either artificial or intelligence.
in reply to Simon Willison

A problem I see is that the colloquial use of “intelligence” implies conscious agency, and brings with it a whole host of assumptions that are not warranted with artificial systems, and that can cause huge problems.
in reply to Simon Willison

we began debating this on the Safe Network forum and it quickly became obvious that it is incredibly hard to define. There are so many ways to look at phenomena that could be called intelligence, so many timescales and scopes.

Really the first step is to clearly specify your terms. Anything ambiguous is pretty useless.

in reply to Simon Willison

this is well covered in the older Norvig books (I just looked because I am sitting next to them). PAIP has a very humorous chapter on “GPS”, the General Problem Solver, and AI: A Modern Approach covers the history very well in Section 1.3 (~page 17), and mentions escape from cybernetics, but not the personal stuff.

(I have a bunch of these books, as I would buy anything I could find that would tell me “what computers can do”, and the Internet really wasn't any good yet)

in reply to Simon Willison

it's not AI at all. Don't let the push marketing sons of bitches claim the memetic space. "Auto complete at scale" ain't intelligent
in reply to Kofi Loves Efia

@Seruko I 100% agree that autocomplete at scale isn't intelligent, but I still think "Artificial Intelligence" is an OK term for this field of research, especially since we've been using it to describe non-intelligent artificial systems since the 1950s

I like "AGI" as the term to use for what autocomplete-at-scale definitely isn't

in reply to Simon Willison

Doesn’t this just re-establish the same problem? AGI isn’t a well-known term, so you’re still left defining the terms of the debate you’re hoping to avoid in order to avoid misleading the reader.
in reply to Evan Hensleigh

@futuraprime maybe!

My hunch is that it's easier to teach people that new term than convince them to reject a term that everyone else in society is already using

in reply to Simon Willison

Yeah, that’s fair. Certainly everyone equates LLMs with AI.

The other part of my reluctance is that lots of people are trying to broaden the term to capitalise on it—I’ve seen “AI” applied to all sorts of unsupervised learning tasks to make them sound fancier. The gulf between someone’s random forest classifier and GPT4 is so huge it makes me want to be more specific.

in reply to Evan Hensleigh

@futuraprime I was tasked with delivering a recommendation system a while ago, and the product owners REALLY wanted it to use machine learning and AI... I eventually realized that what they wanted was "an algorithm", so I got something pretty decent working with a pretty dumb Elasticsearch query plus a little bit of SQL
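
Not the actual query from that project - just a hypothetical sketch of the shape such a "dumb" recommender can take, assuming an "items" index and the v8-style elasticsearch Python client; it's plain term-frequency similarity, no machine learning involved:

    # Hypothetical index and field names; "more like this" does all the work.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    def similar_items(item_id, size=10):
        # Find documents whose text overlaps most with the given item.
        response = es.search(
            index="items",
            size=size,
            query={
                "more_like_this": {
                    "fields": ["title", "description", "tags"],
                    "like": [{"_index": "items", "_id": item_id}],
                    "min_term_freq": 1,
                    "max_query_terms": 25,
                }
            },
        )
        return [hit["_source"] for hit in response["hits"]["hits"]]
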
in reply to Simon Willison

The problem is people taking it literally. Yeah, AI is a field of computer science. But it's being marketed as a product. And it's being hyped as if it's now an achieved reality, instead of just software that mimics human conversation, art, etc.
in reply to Simon Willison

Your readers are probably fine, but the problem is this is the first time this has escaped into the real world. It is being put in front of muggles who have been trained on sci-fi and have wildly unrealistic expectations. We know LLMs are glorified photocopiers, but normal people who I've spoken with genuinely expect the "intelligence" bit to mean that answers come from human-like knowledge and thought. The danger is the AI label means they trust what LLMs generate without question.
in reply to Richard Terry

@radiac I do agree with that, but I'm not sure that's the battle worth fighting right now - my concern is that if we start the conversation with "you know it shouldn't really be called AI, right?" we've already put ourselves at a disadvantage with respect to helping people understand what these things are and what they can reasonably be used to do
in reply to Simon Willison

True, it's not like we can change the narrative now anyway - it's intentional, it's billionaire marketing. Trying to rebrand as "not AGI" is not going to work, the public have never heard of AGI and won't be interested in the difference.

It's trolling vs abuse, or hacker vs cracker again - if I say in the real world "I enjoy trolling" I lose friends, or "I'm a hacker" they imagine me skating around train stations looking for landlines. Difference is, misnomers like that don't risk harm.

in reply to Simon Willison

maybe, but because some science people working on it called it that doesn't mean we have to accept the word.
the more general term hides the more specific, nuanced and more informative details. also, once introduced into the mainstream vocabulary it might clash with other mainstream meanings, and it is easier for a small group to change their wording than for a large group.

i generally think scientists should strive to simplify their language, but some actually hide behind it.

in reply to 𝓼𝓮𝓻𝓪𝓹𝓪𝓽𝓱【ツ】☮(📍🇬🇧)

@serapath I think refusing to accept the word at this point actively hurts our ability to have important conversations about it

Is there an argument that refusing to use the word Artificial Intelligence can have a positive overall impact on conversations and understanding? I'm open to hearing one!

in reply to Simon Willison

i do think AI gives way too much credibility to it. People saw and read scifi movies/books and believe chat gpt & co. despite all the confident bullshit it shares.

also, image recognition is different from a large language model, so what are we even talking about when talking about AI?

it is way too broad to make useful statements, other than what we all saw in scifi movies at some point imho

in reply to 𝓼𝓮𝓻𝓪𝓹𝓪𝓽𝓱【ツ】☮(📍🇬🇧)

@serapath@gamedev.place That's the exact position I'm arguing against

Yes, it's not "intelligent" like in science fiction - but we need to educate people that science fiction isn't real, not throw away a whole academic discipline and pick a different word!

Image recognition is part of AI too

in reply to Simon Willison

hm, yeah no.
i disagree.
mainstream people have as much right to their words as scientists, but mainstream is in the majority and AI will also continue to be abused by marketing to make outrageous claims.
i don't think AI helps anyone and i will continue to ignore anyone talking about AI
in reply to Simon Willison

The word AI does not help anyone with anything, because you also can't tell which version or part i even mean when saying that, hence it is just confusing. 😁

i just meant the term

in reply to 𝓼𝓮𝓻𝓪𝓹𝓪𝓽𝓱【ツ】☮(📍🇬🇧)

@serapath Yeah, it’s polysemic. It means x to researchers, but y to laypeople who only know of ChatGPT. I honestly haven’t seen/heard anyone IRL immediately jumping into a conversation with “but it’s not actually intelligent!!”. What I have experienced is getting partway into a conversation and having to say it - because it has become obvious the other person DOES think “Intelligence” is human-like decision making.
in reply to Jim Gardner

@jimgar @serapath that observation that the term AI is polysemic just expanded my understanding of the core issue here substantially! Thanks for that
in reply to Simon Willison

I was guilty of this just this morning, you've changed my mind. Thank you!
in reply to Simon Willison

"AI" isn't wrong, but I think it is most helpful to use the most specific term that applies. So if you are talking about issues with LLMs in particular, better to say LLMs.
in reply to Joe

@not2b That's what I've been doing, but I think it's actually hurting my ability to communicate. I have to start every blog entry with "LLMs, Large Language Models, the technology behind ChatGPT and Bard" - and I'm not sure that's helping people understand my material better!
@Joe
in reply to Simon Willison

@not2b I don't know - I actually think you are bringing nuance to the discussion (at the expense of grabbing a bit more attention by using AI, which by now is incredibly vague in general discourse) with a statement like "LLMs, a type of statistical model...", which is sorely needed.
Also, I still try to use SALAMI whenever I can ;).
@Joe
in reply to Simon Willison

I so want to agree with you. What's making me a ReplyGuy is that people outside the field put far too much weight on what AI means. Too many don't understand how narrow LLMs are, spinning doomsday scenarios far too easily. (but they ARE powerful!) I don't like to use the term just to back these people off the ledge
in reply to Scott Jenson

@scottjenson Yeah, that's exactly why I was resistant to the term too - the "general public" (for want of a better term) knows what AI is, and it's Skynet / The Matrix / Data from Star Trek / Jarvis / Ultron

I decided to give the audience of my writing the benefit of the doubt that they wouldn't be confused by science fiction

in reply to Simon Willison

"Artificial intelligence has been used incorrectly since 1955" is not a convincing argument to me (and means our predecessors are as much to blame for misleading the general public as contemporary hucksters claiming ChatGPT is going to cause human extinction).
in reply to Chip Warden

@lgw4 I don't think they were wrong to coin a term in 1955 with a perfectly reasonable definition, then consistently apply that definition for nearly 70 years.

It's not their fault that science fiction redefined it from under them!

in reply to Simon Willison

Machines with intelligence similar to (or better than) that of humans (that is, the current popular concept of artificial intelligence) have been present in science fiction since the 19th century. Dystopian (and utopian) fantasies of humans subjugated (or assisted) by these machine intelligences have been science fiction tropes continuously since then. I would wager that John McCarthy was aware of this fact. No one "redefined it from under them."
in reply to Chip Warden

@lgw4 that's not an argument I'd heard before! I know science fiction had AI all the way back to Erewhon en.m.wikipedia.org/wiki/Erewho… but I was under the impression that the term itself was first used by McCarthy
in reply to Simon Willison

I’ve been thinking about this too, but on a slightly different line. It’s not about science fiction, it’s that we so strongly tie language with intelligence. The Turing test is based on this connection. We measure children’s development in language milestones, and look for signs of language in animals to assess their intelligence. It goes back a long way—“dumb” in English has meant both “unable to speak” and “unintelligent” for 800 years. The confusion is reflexive and deep-seated.
in reply to Simon Willison

personally, i refuse to call it artificial intelligence until it is REAL intelligence.
Unknown parent

Jeff Atwood
I mean, I dunno, maybe you built something significant? What is it? Can you share it with us? Or are you another talking head? Feel free to enlighten us, oh master of terminology! What did you build that shaped the world?
in reply to Jeff Atwood

@codinghorror I've built open source stuff, but hopefully my credibility in this particular space comes from having spent the last year working hard to help people understand what's going on - simonwillison.net/2023/Aug/3/w… and suchlike
in reply to Simon Willison

to me, the relationship between "AI" and "LLM/etc." feels somewhat akin to the relationship between "speed" and "velocity" in common usage.

It's not 1:1 or anything, but "AI" feels like it's in word-gruel territory more often. And it's probably fine if colloquial usage doesn't really care about how mushy that usage is.

Unknown parent

Jeff Atwood
@richardsheridan these models have zero understanding of what they are “talking” about, it’s just scraped text statistical inference. Basically a fancy “summarize these 100 articles using the most common words in each article” feature. Which, to be fair, is more useful than cryptocurrency. But that is an absurdly low bar. So yeah, zero “intelligence”. I’ll die on this hill with gusto.
in reply to Jeff Atwood

@codinghorror @richardsheridan I agree with you! These things are spicy autocomplete, they're not "artificial intelligence" in the science fiction definition

My argument here is that AI should mean what it's meant in academia since the 1950s, and we should reclaim it from science fiction

in reply to Jeff Atwood

@richardsheridan I mean, is it a movie reference? Lame. Bring it. I’m ready for you. Let’s go. Let’s see who is on the right side of history here. Cmon. Let’s do this.
in reply to Jeff Atwood

@richardsheridan sorry but that’s the truth. If the truth hurts, I’m sorry. Deal with it dot gif
in reply to Jeff Atwood

@richardsheridan “I don’t think refusing to use the term AI is an effective way for us to do that.” hard disagree and espousing this viewpoint sets us back in computer science as a whole. You are actively causing harm to the field.
in reply to Jeff Atwood

@codinghorror @richardsheridan I'm ready to be convinced of that - that calling it "AI" really does cause harm and that there are more useful terms we can be using - but you need to make the argument
in reply to Simon Willison

one argument that you’re not addressing here is that it dates anything you are writing, in a way that makes it hard to understand without first understanding its contemporaneous terminology. Our current view of AI as an actual *technology*—statistical machine-learning techniques, as opposed to just the chatbot UI paradigm—is quite new and quite *at odds with* previous understanding of the term (like, say, expert systems). It may be at odds with future understandings as well.
in reply to Glyph

Also, not for nothing but you are giving the lay public _way_ too much credit when it comes to understanding the limitations of LLMs and PIGs. Numerous people are doing additional jail time because even highly-educated, nationally-renowned *lawyers* cannot wrap their heads around this. The term very definitely obscures more than it reveals, and the “well, actually” pedantic conversation about its inappropriateness *does* drive deeper understanding of it.
in reply to Glyph

@glyph I feel like “AI” has a very precise layman’s definition and a very vague practitioner’s definition. To a layman AI means AGI, “a computer that can think like a person.” To a practitioner AI means…? “Statistical ML ish?” “LLMs and PIGs?” “I get more funding if I call this AI?” The public has a very precise definition! That’s so rare. We shouldn’t water it down and say “oh that’s actually A~G~I” for no reason.
in reply to Carlana Johnson ​

@carlana @glyph I love that idea that laymen have more confidence in the definition than practitioners do!
in reply to Simon Willison

@carlana this strikes closer to the heart of my objection. A lot of insiders—not practitioners as such, but marketers & executives—use "AI" as the label not in spite of its confusion with the layperson's definition, but *because* of it. Investors who vaguely associate it with machine-god hegemony assume that it will be very profitable. Users assume it will solve their problems. It's a term whose primary purpose has become deceptive.


in reply to Glyph

@carlana At the same time, a lot of the deception is unintentional. When you exist in a sector of the industry that the public knows as "AI", that the media calls "AI", that industry publications refer to as "AI", that *other* products identify as "AI", going out on a limb and trying to build a brand identity around pedantic hairsplitting over "LLMs" and "machine learning" is a massive uphill battle which you are incentivized at every possible turn to avoid.
in reply to Glyph

@glyph @carlana that's exactly it - I've been half-heartedly fighting the LLM hairsplitting fight for most of the last year and I got tired of it - it didn't feel like it was gaining anything meaningful
in reply to Simon Willison

@carlana personally I am trying to Get Into It over the terminology less often, but I will still stick to terms like "LLMs", "chatbots", and "PIGs" in my own writing. Not least because the tech behind PIGs/PVGs, LLMs, and ML classifiers is actually all pretty different, despite having some similar elements
in reply to Glyph

@glyph @carlana what are PIGs and PVGs? I tried digging around and couldn't figure those ones out!
in reply to Simon Willison

@glyph @carlana I think it's those battles that can't be won, but will be lost even more badly if we stop fighting altogether.
in reply to Simon Willison

The less you know the more confident you are. Just ask an LLM.

I intentionally avoid the term AI and advise other technically minded folks to do the same because it is a purely Marketing term. It will never have a meaningful definition.

Everything I've ever worked on to automate tasks with computers in the past 30 years would be called AI today by a Marketing Department despite none of it involving ML.

Their definition is "this term attracts attention and money", oriented around their goal. The lay person hearing it has a definition of "hype buzzword bingo score for Product Name". It doesn't communicate anything.

Elide the term AI from any context in which it gets used to describe something and it should still be just as meaningful. If not, nothing was being said.

Be right back. I'm gonna go hit Tab in my command line so the shell's AI can do what I want for me. 😛

@carlana @glyph

in reply to Simon Willison

@codinghorror @richardsheridan
Has "AI" ever carried connotations of actual intelligence in the CS field? "AI" used to mean expert systems, logical inference, playing chess, "fuzzy logic", and so on and so on - none of which had any more to do with actual intelligence than deep neural networks.
in reply to Dаn̈ıel Раršlow 🥧

@pieist yes, absolutely - I think the thing that's not OK here is fiercely arguing that people who call LLMs AI shouldn't do that to the point of derailing more useful conversations
in reply to Janne Moren

right: that's my point: AI is a term we have used since the 1950s for technology that "isn't actually intelligent", so there's plenty of precedent for using it that way

That's why we have the term "AGI"

in reply to Simon Willison

it's fair, I will write up my viewpoint in substantially more detail tomorrow. There is no "intelligence" in LLMs.
Unknown parent

Jeff Atwood
@b_cavello if you are harming the entire industry and everyone in it, fuck yes
in reply to Jeff Atwood

@codinghorror @b_cavello just to clarify, I'm not an "AI is the best thing ever" hype-merchant - I have written extensively about the many downsides and flaws of modern AI

- simonwillison.net/2022/Sep/5/l…
- simonwillison.net/2023/Apr/10/…
- simonwillison.net/series/promp…
- simonwillison.net/tags/ai+ethi…

in reply to Simon Willison

I’m inclined to disagree, but I do think that it’s a bit of a lost battle. I’d rather encourage people to “yes, and” and just get more specific:
aspendigital.org/report/ai-101…
I don’t take issue with the term “AI,” however, and I think that’s a handy alternative. Sisi Wei actually beat me to the punch on this in a recent #TalkBetterAboutAI conversation: youtu.be/KSsxuEtGgEg
in reply to Jeff Atwood

@codinghorror I'm not arguing that there's any intelligence in them here - I'm arguing that reacting to that fact by trying to discourage the use of the term "AI" (which I myself have tried to do in the past) isn't the best use of our efforts
in reply to Simon Willison

@jannem @richardsheridan "AGI is also known as strong AI,[11][12] full AI,[13] human-level AI[6] or general intelligent action" so many weasel words here it's hard to keep count. I will be destroying this tomorrow in context.
in reply to Simon Willison

"so-called AI", "technology marketed as 'AI'", or even just "AI" in quotes, seem to solve the "most people [..] don’t know what it means" issue, while contributing a lot less to the other problem: while "AI is [..] already widely understood", its common understanding is something way beyond what it actually does, which is dangerous for all the reasons we seem to be agreeing on in this thread.
in reply to Brantley Harris

@deadwisdom I think we should keep AI and push AGI for the science fiction version simonwillison.net/2024/Jan/7/c…
in reply to Simon Willison

yes, AI is a broad area of research in computer science and most of the sub-areas have not focussed on Artificial General Intelligence. The science fiction use of the term is far from the bulk of the research in this space. There was a period where Neural Networks fell out of favour due to their black box nature. Now with deep learning advances they are all the rage again, but still limited in their knowledge representation and ability to reason.
Unknown parent

Simon Willison

@deivudesu I'm personally unexcited about this ongoing quest for AGI - I just want useful tools, pretty much LLMs with some of the sharper edges filed off

If AGI ever does happen my hunch is that LLMs may form a small part of a larger system, but certainly wouldn't be the core of it

in reply to Simon Willison

I’m with you on this. Nothing published in the journal Artificial Intelligence in the 50 years of its existence qualifies as “artificial intelligence” in the sense of the word that people concerned about its use impute. That people misinterpret a term used in academic research isn’t something to be fixed by changing academic terminology, but by changing lay understanding of what is and isn’t implied imo. The key thing is increasing understanding of what #LLMs do and don’t do - as you are!
in reply to Simon Willison

@UlrikeHahn
But isn't there a difference between the research and "those things" (i.e. recent consumer products like bing chat etc., which are not research about intelligence but consumer products marketed as intelligent)?
in reply to Simon Willison

"The most influential organizations building Large Language Models today are OpenAI, Mistral AI, Meta AI, Google AI and Anthropic. All but Anthropic have AI in the title; Anthropic call themselves “an AI safety and research company”. Could rejecting the term “AI” be synonymous with a disbelief in the value or integrity of this whole space?"

Rejecting those companies and their business models? Yes. For me "AI" is a marketing phrase and using it to describe #MOLE is doing unpaid PR work.

in reply to Simon Willison

Counterargument: daniel.haxx.se/blog/2024/01/02…

"AI" as a term, like many other things, was a male ego thing. McCarthy: "I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him." en.m.wikipedia.org/wiki/Histor…

"AI" is the biggest terminology stretch in the history of computing, and using it is "OK" only because everybody else is doing it, but that's a weak excuse.

in reply to Juan Luis

@astrojuanlu I hadn't seen that quote regarding cybernetics before, that's fascinating!


in reply to Simon Willison

@astrojuanlu some (such as me) might claim everything about AI, not just the name, is a male ego thing. Also cybernetics was about much more than artificial intelligence.
in reply to Simon Willison

@astrojuanlu

I didn't know that either. I can see why one would want to disassociate symbolic AI from cybernetics, but of course there's an irony given where AI ended up. The trend towards connectionism in AI was already well underway by the early 90s, though; considering neural networks as AI is nothing new.

in reply to Simon Willison

I have seen claims that "smart" was used within industry to avoid claims of intelligence after the AI winters. But that term is, of course, also not very informative today.
in reply to Simon Willison

At least my motivation for challenging the term is _not_ that AI is not actually intelligent, but to spark discussion about the level of abuse by AGI proponents. Just look at OpenAI’s mission statement: they are actively abusing what the “I” implies to the general public with a pompous vision, intentionally shifting the meaning of “I”. They should call themselves ClosedAGI instead. We should focus on “Useful Computation”, whatever paradigms that requires.
in reply to Simon Willison

I think there is a point because something has changed. People are suddenly experiencing something uncannily like all the fictional AIs they've read about and watched in movies.

Many people, including plenty I expect to know better, are seeing a conversational UX with a black box behind it, as opposed to a few lines of BASIC, and then making wildly overblown assumptions about what it is. Deliberately encouraged by those using deceptive framing such as 'hallucinations' to describe errors.

in reply to Simon Willison

Using words that have achieved common meaning through time (despite their origin) is how we are able to communicate.

This is a thoughtful justification, but it's also a support of common sense.

in reply to Simon Willison

I always thought that if it's actually intelligent then it would just be AI, Actual Intelligence.
Unknown parent

Simon Willison

@tml yeah, that's a point that could be argued

I think LLMs fit the general research area of /trying/ to get machines to do that - in the same way that creating the LISP programming language was part of attempts to build towards that goal

in reply to Simon Willison

wired: “AI isn’t actually intelligent”
tired: “crypto means cryptography”
expired: “actually it’s GNU/Linux”

in all cases, objectors are correct, but missing the point of general audience (as opposed to technical audience) communication.

in reply to Simon Willison

@jannem @richardsheridan then we need something like the SAE J3106 designations for "intelligence" see blog.codinghorror.com/the-2030…
in reply to Simon Willison

I agree! I wrote a bit about the terminological critique here: sanchom.github.io/atlas-of-ai.…
in reply to Jeff Atwood

@codinghorror @jannem @richardsheridan Absolutely, something like that would help enormously - as it stands, any arguments that "it's not really intelligent" inevitably lead to a debate about what "intelligence" really is, which doesn't appear to have any useful conclusion yet
in reply to Simon Willison

@jannem @codinghorror @richardsheridan but that feels like “this is what we have managed to deliver” - the name seemed to be the general aspiration of what they were trying to achieve.
in reply to Simon Willison

I propose we split off the term “Eh Eye” to refer to the at best useless and at worst harmful hype driven vaporware emerging from the LLM boom, and leave the computer scientists, neuroscientists, philosophers and theologians to argue about the definition of Artificial Intelligence.
in reply to Simon Willison

I feel like LLMs are one of the first technologies where "Artificial Intelligence" sort of applies. GPT4 can do things I cannot do, do tasks which it wasn't explicitly trained on, etc. It's not very good at a lot of this and has obvious limitations. But it seems much harder to explain it away as "just" doing XYZ, as with earlier AI technologies like symbolic calculus, expert systems or statistical classifiers.
Unknown parent

Simon Willison
@bouncing @spacehobo @codinghorror @richardsheridan Adrian named Django after Django Reinhardt because he's really into gypsy jazz - see youtube.com/watch?v=_6CNlqSF1o… for a recent example!
Unknown parent

Ken Kinder
@spacehobo @codinghorror @richardsheridan I don’t think the musician is well known outside of jazz circles, but maybe I’m mistaken. Either way, who cares? It’s a great project.
in reply to Jeff Atwood

@codinghorror @richardsheridan Django Unchained (the movie) came many years later. So unless time travel is a thing, that seems unlikely.

But Jeff, seriously, besides being wrong, you’re being a massive jerk. Even if Simon hadn’t invented what is perhaps the world’s most popular web framework, would it matter? Is this who you are? Yelling “dO yOU kNoW wHo I am!?” to strangers on the internet?

in reply to Ken Kinder

@bouncing @codinghorror @richardsheridan en.wikipedia.org/wiki/Category… ← Odd that the Sean Penn film didn't make it into this list, but it would be weird to claim that as a film reference rather than just a reference to the musician himself at that point.