I already debated this on HN when this was posted two days ago, but this paper is not peer-reviewed and is a draft. The examples it uses of DOGE and of the FDA using AI are not well researched or cited.
Just as an example, they criticize the FDA for using an AI that can hallucinate whole studies, but they don't mention that it's used for product recalls. The source they cite for this criticism is an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources: disgruntled employees who had since left the agency.
Basically what I'm saying is the more you dig into this paper, the more you realize it's an opinion piece.
Not only is it an opinion piece disguised as a scientific "article" with a veneer of law, it has all the hallmarks of quackery: flowery language full of allegory and poetic comparisons, and hundreds of superficial references from every area imaginable sprinkled throughout, including but not limited to Medium blog posts, news outlets, IBM one-page explainers, random sociology literature from the '40s, '60s, and '80s, etc.
It reads like a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, and proceeded to be informed on the subject of AI almost exclusively by journalists from popular media outlets like Wired, Engadget, and The Atlantic, then brought it all together by shoddily referencing his peers at Harvard and some curious-sounding '80s sociology. But who cares, as long as AI is bad, am I right?
I'm finding it hard to identify any particulars in this piece, considering the largely self-defeating manner in which the arguments are presented, or should I say compiled, from popular media. Had it not been endorsed by Stanford in some capacity, and sensationalised with a punchy headline, we wouldn't be having this conversation in the first place! Now, much has been said about the various purported externalities of LLM technology, and continues to be said daily, here in Hacker News comments if not elsewhere. Between wannabe ethicists and LessWrong types contemplating the meaning of the word "intelligence," we're in no short supply of opinions on AI.
If you'd like to hear my opinion, I happen to think that LLM technology is the most important, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein presents the only viable framework for comprehending AI in all of humanities. Partly because it's what an LLM "does": compute arbitrary discourses. And partly because that is what all good humanities end up doing: examining arbitrary discourses, not unlike the current affairs they cite in the opinion piece at hand, for the arguments they present and, ultimately, the language used to construct those arguments. If we're going to be concerned with AI like that, we should start by making an effort to avoid the kinds of language games that allow frivolously substituting "what AI does" for "what people do with AI."
This may sound simple, obvious even, but it also happens to be much easier said than done.
That is not to say that AI doesn't make a material difference to what people would otherwise do without it. But all of language is a tool, a hammer, if you will, that only gains meaning during use, and AI is no different in that respect. For the longest time, humans had a monopoly on computing arbitrary discourses. This is why lawyers exist, too: so that we may compute certain discourses reliably. What has changed is that now computers get to do it, too, currently with varying degrees of success. For "AI" to "destroy institutions," in other words, for it to do someone's bidding to some undesirable end, something in the structure of said institutions must allow that in the first place! If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.
You seem to be relying too heavily on your own "language games". For instance, flip flopping between using "LLM technology" and "AI" to refer to what appears to be the same thing in your argument. I find it all quite incomprehensible.
> If you'd like to hear my opinion, I happen to think that LLM technology is the most important, arguably the only thing, to have happened in philosophy since Wittgenstein;
So, assume cognitive bias and a penchant for hyperbole.
> LLM technology is the most important, arguably the only thing, to have happened in philosophy
Why would "LLM technology" be important to philosophy?
> arguably the only thing, to have happened in philosophy
Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?
> indeed, Wittgenstein presents the only viable framework for comprehending AI in all of humanities.
What could this even mean?
Linguistics would appear to be at least one other of the humanities applicable to large language models.
Wittgenstein was famously critical of Turing's claim that a machine can think, to the extent that he claimed it led Turing into misunderstandings even in his mathematics.
Wittgenstein also disliked Cantor, and even the concept of 'sets'.
I am struggling to see how this all adds up to being the "only viable framework for comprehending AI".
> If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.
This is a wild ride.
So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions," and it's a good thing because we can improve the institutions by fixing the exploitable areas, which is also a wholly speculative outcome with many counterexamples in real life.
Reads like: "Sure, I broke your window and robbed your store, but you should be thanking me and encouraging me to break more windows and rob more people because I illuminated that glass is susceptible to breaking when a rock is thrown at it. Oh, your shit? I'm keeping it. You're welcome."
My writing can be erratic sometimes, but "flip flopping" is a bit unfair, don't you think? When they say "AI," I assume they mean LLM technology and its applications above all else; the so-called "intelligent agent" discourse is a big one, but it's important to remember why it works in the first place. Well, because the pretraining stage is already capturing all the necessary information, right? Moreover, mechanistic studies show that most of the significant information is preserved in the dense layers, not the attention heads. So there's something very fundamental, albeit conceptually simple, going on that allows for a whole bunch of emergent behaviour, enabling much more complex discourses.
> Why would "LLM technology" be important to philosophy?
Well, because it has empirically shown that Wittgenstein was more or less right all along, and linguists like Chomsky (I would go as far as saying Kripke, too, but that's a different story) were ultimately wrong! To put it simply: in order to learn language, and by extension compute arbitrary discourses, you never need to learn the definitions of words. All you need is demonstrations of language use. The same goes for syntax, grammar, and a bunch of other things linguists obsessed over for decades, like modality. (But that's a different story altogether!) Computer science people call this the bitter lesson, but that is only a statement about predictive power, not emergent power. If it were only a matter of learning existing discourses, that wouldn't be remotely as surprising. Computing arbitrary discourses is a much stronger proposition!
> Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?
LLMs were a bit of a shock, and a lot of people are not receptive to the idea that the Wittgensteinians won; basically, game over. There will be more flailing, but ultimately they will adapt. You can already see this with Askell and other traditionally trained philosophy people adopting language games; it's just that they call it alignment. It's no coincidence she went to Cambridge, either. It will take a bit of time for "academic philosophy" to recognise this, but eventually they will, because why wouldn't they?
Game over.
> Linguistics would appear at least one other of the applicable humanities to large language models.
Yeah, not really. All the interesting stuff that is happening has very little to do with linguistics. There's prefill from grammar, but it would be a stretch to attribute it to linguistics. In the linguistic literature, word2vec was a big deal for a while, but they've done fuck-all with it ever since. I'm not trying to be hyperbolic here, either.
> Wittgenstein was famously critical of Turing's claim that a machine can think
I never understood this line of reasoning. So what if Wittgenstein and Turing had disagreements at the time? Wittgenstein never had a chance to see LLMs, or anything remotely like them. This was an unexpected result, you know? We could have guessed that it would be the case, but there was no evidence. We still don't have a solid theory to get from Frege to something like modern LLMs, and we may never have one, but the evidence is there: Wittgenstein was right about what you need for language to work.
> Wittgenstein also disliked Cantor. and even the concept of 'sets'.
I don't see what this has to do with anything.
> So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions," and it's a good thing because we can improve the institutions by fixing the exploitable areas, which is also a wholly speculative outcome with many counterexamples in real life.
I never said AI "exploits" anything. I only ever said that being able to compute arbitrary discourses opens many more doors than a pigeonholing insinuation like that would suggest. What wasn't obvious before is becoming obvious now. (This is why all these people are coming out with "revelations" about how AI is destroying institutions.) And it's not because of material circumstance. It's just that some magic was dispelled, so things became obvious, and this is philosophy at work.
This is real philosophy at hand, not some academic wanking :-)
Again, I find it very difficult to get past your own personal "language games."
> Game over.
Is a perfect example. What "game" is "over"?
Chomsky's philosophical linguistics has long been derided and stripped for parts, and he was friends with Epstein and his cohort, so he can fuck right on off to disgrace and obscurity. But his goals within linguistics, as I understand them, were to identify why humanity has its faculty of language.
Wittgenstein was uninterested in answering the same question, and large language models are about as far from an answer to that question as one can get.
So, again, I am unsure what has been settled to the point of decrying "Game over".
Does this game only have two "teams"? One possible "outcome"?
Who's on what side of the "game"?
What have they said that shows their allegiance to one idea, and what have they said in opposition to the other?
What about large language models either support or contradict, respectively, said ideas?
As a huge fan of the ideas and writings of Wittgenstein, I find it hard to believe that there are contemporary 'philosophers' who disagree with his ideas, namely that words take on meaning through context; but there are certainly trolls and conservatives in every field.
> an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources: disgruntled employees who had since left the agency.
It is inaccurate, though. Those employees never used the system, and they incorrectly described what it is used for. I did some legwork before drawing my conclusions.
EDIT: citing some resources here for those who are curious.
This is what drafts are for. It's either a very rough draft with some errors and room for improvement, or a very bad draft sitting on the wrong foundation.
Either way, it's an effort, and at least the authors will learn what not to do.
No, it’s definitely not what drafts are for. Fundamental issues of the nature pointed out by the parent comment are way too serious to make it into a draft. Drafts are for minor fixes and changes, as per the usual meaning of the word draft.
> Institutions like higher education, medecine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts., respectively,, thereby reducing chaos and friction.
Hard to take seriously with so many misspellings and duplicate punctuation.
I vibe with the general "AI is bad for society" tone, but this argument feels a lot like "piracy is bad for the film industry" in that there is no recognition of why it has an understandable appeal to the masses, not just to cartoon villains.
Institutions bear some responsibility for what makes AI so attractive. Institutional trust is low in the US right now; journalism, medicine, education, and government have not been living up to their ideals. I can't fault anyone for asking AI medical questions when it is so complex and expensive to find good, personalized healthcare, or for learning new things from AI when access to an education taught by experts is so costly and selective.
> Hard to take seriously with so many misspellings and duplicate punctuation.
Very bad writing, too, with unnecessarily complicated constructions and big words seemingly used without a proper understanding of what they mean (machinations, affordances).
It's funny how many of us know the shortcomings of AI, yet we can't be bothered to do the thing ourselves and read, or at least skim, an in-depth research paper to increase our depth.
Even if we don't agree with what we read, or find its flaws.
Paradox of the century.
P.S.: Using ChatGPT to summarize something you can't be bothered to skim, while claiming AI is a scam, is the cherry on top.
I read the entire paper a couple of days ago and have done a lot of work to critique it, because I think it is flawed in several ways. Ironically, this AI summary is actually quite accurate. You're getting downvoted because posting AI output is not condoned, but that doesn't mean that in this case it isn't correct.
They're getting downvoted because without even taking a look at the paper, they felt that "please create a summary of the stupid, bad faith, idiot, fake science paper" is a reasonable way to ask for a summary.
The link to download the paper is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623