For now, this is just funny, I laughed. But with the advent of all these new open-source LLMs, it will get worse.
If you thought people were gullible for falling for fake comments from bots, just wait for the near future. Post-factual/Post-Truth era truly begins. The internet that used to be a source of information is no more, only islands of truth will remain (like, hopefully, Wikipedia, but even that is questionable). The rest will be a cesspool of meaningless information. Just as sailors of the sea are experts at navigating the waters, so we'll have to learn to surf the web once again.
The funniest thing for me is how stupidly lazy these jerks that employ GPT for such things are. The printed book example really made me lol.
The simplest thing they could've done is use a service like quillbot to rephrase, just as I used here to rephrase my comment:
-----------------------
I chuckled. For now, this is just hilarious. However, it will grow worse when more new open-source LLMs emerge.
Just wait till the near future if you thought people were naive enough to believe counterfeit comments from bots. The post-factual/post-truth age has arrived. Only isolated truths will remain when the internet ceases to be a reliable source of knowledge (like, ideally, Wikipedia, but even that is debatable). The remaining material will be an ocean of useless data. We'll have to relearn how to navigate the web, just as seafarers are specialists at navigating the seas.
The most amusing thing to me is how exceptionally sloppy these idiots are that use GPT for such things.
The thing is, the internet is already a cesspool of meaningless, misleading, and outright malicious information. I guess the earlier we all collectively realize it, the better.
The internet is currently quite useful, and it could become a lot less so. Don't let your cynicism blind you to the fact that we do have a lot to lose.
At least in my case, I didn't mean to dismiss the internet or to imply that we have nothing to lose. There is some good information there, but it is important that we do not believe everything (or even most) of what we read and see.
I am more optimistic here. While LLMs allow you to produce tons of garbage, they also provide the tools to filter through that garbage, something we didn't have before. LLMs allow us to view content in a way that we decide, not the content creator. That's extremely powerful and lets us sidestep a lot of the old methods used to manipulate us.
The risk is more in the LLMs themselves, as whoever gets to control them gets to decide how people are going to experience the world. For the time being I might still double-check all the answers I get from ChatGPT, but over time the LLMs will get better and I'll get lazier, making the LLMs the primary lens through which one views the world.
> The risk is more in the LLMs themselves, as whoever gets to control them gets to decide how people are going to experience the world. For the time being I might still double-check all the answers I get from ChatGPT, but over time the LLMs will get better and I'll get lazier, making the LLMs the primary lens through which one views the world.
You've underlined the major risk these LLMs pose for humanity. For a brief time in the history of the human race, after information was democratized, most of us (at least educated people) had to use our own critical faculties to understand the world we live in. Now, that capacity will be outsourced to custom LLMs, most of them derived from other pre-trained models with ideological biases built in. The informational Dark Ages of the technological era.
If they provide the tools to filter through the garbage, it'll probably be standardized in some way as an interface to the web. So just as HTML and its satellite technologies limit and standardize the representational aspect of information on the web, I think this AI-interface will severely limit the knowledge/wisdom aspect you can derive from information on the web. It's a hard thing to put my finger on, I hope you can understand what I'm saying.
Reflexively then, good comments are good, no matter what produced them. Or is a quality comment impugned by knowing it came from an LLM? Does it cheapen what it means to be human if other humans think highly of an LLM's attempts at English? Is it at all impressive that ChatGPT is able to spell words correctly, given that it's a computer? What does that mean for the spelling bee industry?
Predicting whether a text was written by an LLM is not trivial. What was the latest number from OpenAI? 30%? As LLMs get better, it seems like we won't be able to distinguish real text from fake text. Your LLM will be able to summarize it, but it will still be 99% spam.
You don't need to predict whether it was written by an LLM; whether it came from a human or a machine makes no difference to the validity of a text. You just need to be able to extract the actual information out of it and cross-check it against other sources.
The summary that an LLM can provide is not just of one text, but of all the texts about the topic it has access to. Thus you never need to access the actual texts themselves, just whatever the LLM condenses out of them.
You "just" need to "extract the actual information out of it and cross check it against other sources".
How do you determine the trustworthiness of those other sources when an ever increasing portion are also LLM generated?
All the "you just need to" responses are predicated on being able to police the LLM output based upon your own expertise (e.g., much talk about code generation being like working with junior devs, and so being able to replace all your juniors and just have super-productive seniors).
Question: how does one become an expert? Yep, it's right there: experts are made through experience.
So if LLMs replace all the low experience roles, how exactly do new experts emerge?
You're trusting the LLM a lot more than you should. It's entirely possible to skew those too. (Even ignoring the philosophical question of what an "unskewed" LLM would even be.) I'm actually impressed by OpenAI's efforts to do so. I also deplore them and think it's an atrocity, but I'm still impressed. The "As an AI language model" bit is just the obvious way they're skewed. I wouldn't trust an LLM any farther than I can throw it to accurately summarize anything important.
For HN and forums in general, I think this will mean disabling APIs and having strict captchas for posting.
Beyond HN, I think this will translate into video content and reviews becoming more trustworthy, even if it's just a person reading an LLM-produced script. You will at least know they cared enough to put a human in the loop. That and reputation. More and more credit will be assigned based on reputation, number of followers, etc. And that'll hold until each of these systems gets cracked somehow (fake followers, plausible generated videos, etc.).
Banal is banal, whether written by a human or not.
But GPT text is inherently deceptive, even when factually flawless— because we humans never evaluate a message merely on its factuality. We read between the lines. The same way insects are confused and fly in spirals around light, we will be flying spirals around GPT text based on our assumptions about its nature or the nature of the human whom we presume to have written it.
Bachelor's degrees have mostly been a signal for a long time. The problem is that we have credential inflation, so now you need a master's or PhD to send that same signal to employers. As a result, you have fewer people going to college, but a greater percentage of those who do go are getting advanced degrees.
LLMs check the answers? How do they check the answers? By what appears most frequently in the training corpus - that's the "answer".
So, how well curated are the texts that make up the training corpus? Is it just what's generally available on the internet? How much do you think that text accurately reflects reality? "Truth is determined by the most frequent posters" seems like really bad epistemology.
> For now, this is just funny, I laughed. But with the advent of all these new open-source LLMs, it will get worse. If you thought people were gullible for falling for fake comments from bots, just wait for the near future. Post-factual/Post-Truth era truly begins. The internet that used to be a source of information is no more, only islands of truth will remain (like, hopefully, Wikipedia, but even that is questionable). The rest will be a cesspool of meaningless information. Just as sailors of the sea are experts at navigating the waters, so we'll have to learn to surf the web once again.
I'm not sure what rock you've been living under, but this has been the internet for probably longer than a decade by now, the only difference is the volume. Even back before LLMs, or before Facebook, you couldn't take any "fact" at face value when found via the internet. And before that, the same people who fall for it now on the internet, fell for it when watching TV, or reading newspapers. People who are not interested in truth because it doesn't fit their world-view, will never be interested in the truth, no matter what medium it comes via.
I am aware of that. I like to think that millennials/gen-z at least knew a little about how to sift through fake information, and that the gullible people were the elders.
But now, with such obscene amounts of fake info at every corner, I think the internet and all sources of information (even printed! - because print at least requires significant effort) will lose credibility. Science will be the last bastion, and even that can easily be influenced by money.
> People who are not interested in truth because it doesn't fit their world-view, will never be interested in the truth, no matter what medium it comes via.
Yes, the claim is self-referential in the sense that it describes a certain attitude towards truth and how that attitude can affect one’s openness to new information. Specifically, the claim suggests that individuals who are not interested in truth because it conflicts with their existing beliefs are unlikely to change their minds even when presented with evidence or information that contradicts their views. This can create a self-reinforcing cycle where the individual becomes increasingly resistant to new ideas and perspectives.
The claim is: "People who are not interested in truth because it doesn't fit their world-view, will never be interested in the truth, no matter what medium it comes via."
It is not a suggestion; it does not say "it is unlikely". It is an unequivocal assertion of fact.
> This can create a self-reinforcing cycle where the individual becomes increasingly resistant to new ideas and perspectives.
That's my point (about the thinking underlying the comment in question).
It's interesting how humans privilege themselves when applying epistemology - other people's claims must be actually true, but for one's own claims "close enough" is typically an adequate bar. And it is typically only the other person who needs to improve their thinking.
The thing about gippie is it will never shut up, it lists in bullet-point fashion, and it uses a lot of filler words: 'like', 'however', 'additionally', 'currently', 'also', 'that', etc.
I feel I can start to tell when someone uses gippie, because I use it a lot. I imagine a future where I use gippie to write an email and the receiver uses gippie to summarize and respond. There's also a future evolution of the 'typo', where gippie hallucinates some nonsensical answer. "Oh my bad, my bot's trippin' LOL."
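That filler-word tell can even be turned into a crude heuristic. This is a toy sketch, not a real detector; the word list and the scoring are entirely made up for illustration:

```python
# Toy heuristic: score text by the fraction of words that are
# common LLM filler words. The word list is an arbitrary example.
FILLERS = {"however", "additionally", "currently", "also", "furthermore"}

def filler_ratio(text: str) -> float:
    """Fraction of words in `text` that are filler words (0.0 to 1.0)."""
    words = [w.strip(".,;:!?'\"()").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in FILLERS for w in words) / len(words)

sample = "Additionally, however, this answer is currently also correct."
print(filler_ratio(sample))
```

A real classifier would of course need far more than a word list, which is exactly why detection is so hard.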
There will be self-verifiable truths, like provable theorems in axiomatic mathematics. There will be enforceable contracts, like Elon Musk's purchase of Twitter. There will be quarterly investor reports and earnings calls from public companies that avoid lying at risk of shareholder and SEC lawsuits. There will be documents timestamped with hashes and Bitcoin. The bots will need karma points as well.
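The "documents timestamped with hashes" idea boils down to publishing a digest now so you can prove later that a document existed unchanged at that time. A minimal sketch (the sample document text is hypothetical):

```python
import hashlib

def document_digest(data: bytes) -> str:
    """SHA-256 digest of a document; publish this (e.g. in a Bitcoin
    transaction) to timestamp the document without revealing it."""
    return hashlib.sha256(data).hexdigest()

digest = document_digest(b"quarterly report, Q1")
print(digest)  # 64 hex characters; any edit to the document changes it
```

The verifier later rehashes the document and compares against the published digest; a match proves it hasn't been altered since publication.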