- 0 Posts
- 42 Comments
ProfessorScience@lemmy.world to politics@lemmy.world • Oregon becomes first US state to ban private equity control of doctor and clinical practices • English · 8 · 1 month ago
I’d love to see more of this.
ProfessorScience@lemmy.world to Technology@lemmy.world • Former Meta exec (Nick Clegg) says asking for artist permission will kill AI industry • English · 61 · 2 months ago
If I ran the zoo, then any AI that trained on intellectual property as if it were public domain would automatically become public domain itself.
ProfessorScience@lemmy.world to politics@lemmy.world • Trump's approval rating on the economy drops to lowest of his presidential career, CNBC Survey finds • English · 22 · 3 months ago
43% of people:
ProfessorScience@lemmy.world to Technology@lemmy.world • Microsoft tells Windows 10 users to just trade in their PC for a newer one, because how hard can it be? • English · 5 · 4 months ago
I installed Linux on my PC a couple of months ago. The other day I wanted to log back into my Windows partition for the first time in a while to clean up some of the files on it (even though the drive is mounted in Linux, the Windows “fast boot” option apparently leaves it in a state that Linux considers read-only). Windows wouldn’t let me log in without a Microsoft account, instead of just using my regular Windows username.
So yeah, that partition’s gone now. No going back!
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 3 · 4 months ago
Cherry-picking a couple of points I want to respond to together:
> It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems we know involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory we see in biological systems.
> It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead end towards AGI.
I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to me to write what LLMs do off as purely linguistic analysis. Language is the input and the output, by all means, but the same could be said in a case where you were communicating with a person over email, and I don’t think you’d say that that person wasn’t sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind words that they read.
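To gesture at what I mean by embedding tokens into multidimensional space, here’s a toy sketch. The vectors are made up for illustration; a real model learns them (in far more dimensions) during training:

```python
# Toy illustration: each token maps to a vector, and tokens with related
# meanings end up near each other in that space. These numbers are
# hand-picked for the example; real models learn them from data.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.7, 0.2, 0.9],
    "apple": [0.1, 0.2, 0.9, 0.4],
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.91)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower (~0.37)
```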
> As a component of a system, it becomes much more promising.
It sounds to me like you’re more strict about what you’d consider to be “the LLM” than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone’s brain is sentient.
> Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.
I’m not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 2 · 4 months ago
> LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology
Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.
> LLMs do not synthesize. They do not have persistent context.
That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context, if you consider that “conversations” with an LLM really just include all previous parts of the conversation in each new prompt. Isn’t that analogous to short-term memory? Now suppose you were to take all of an LLM’s conversations throughout the day and then retrain it overnight, using those conversations as additional training data. There’s no technical reason that this can’t be done, although in practice it’s computationally expensive. Would you consider that LLM system to have persistent context?
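To make that concrete, here’s a minimal sketch of what a chat “session” usually amounts to. The `generate` function is a hypothetical stand-in for whatever model you’d actually call; the point is that the model itself is stateless, and the “memory” is just the transcript being replayed:

```python
# Sketch: an LLM "conversation" is usually just a growing transcript that
# gets re-sent with every turn. The model is stateless; the "memory"
# lives entirely in the prompt.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (local or API)."""
    return f"[model reply based on {len(prompt)} chars of context]"

transcript = []  # accumulated (speaker, text) pairs

def chat(user_message: str) -> str:
    transcript.append(("User", user_message))
    # The entire history is flattened into one prompt on every turn.
    prompt = "\n".join(f"{who}: {text}" for who, text in transcript)
    reply = generate(prompt)
    transcript.append(("Assistant", reply))
    return reply

chat("What's a token?")
# The second turn only "remembers" the first because the first exchange
# is replayed as part of the new prompt:
chat("And how does that relate to what I just asked?")
```

Retraining overnight on the day’s transcripts would bake that context into the weights instead of the prompt.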
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 1 · 4 months ago
lol, yeah, I guess the Socratic method is pretty widely frowned upon. My bad. =D
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 2 · 4 months ago
I don’t think it’s just a question of whether AGI can exist. I think AGI is possible, but I don’t think current LLMs can be considered sentient. But I’m also not sure how I’d draw a line between something that is sentient and something that isn’t (or something that “writes” rather than “generates”). That’s kinda why I asked in the first place. I think it’s too easy to say “this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it’s not real thought”. I haven’t heard any good answers to why numbers passing through matrices isn’t thought, but electrical charges passing through neurons is.
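For what it’s worth, here’s roughly what “numbers passing through matrices” means at the level of a single layer. This is a toy sketch with made-up weights, not any real model’s architecture:

```python
# A single neural-network layer: multiply the inputs by a weight matrix,
# add a bias, squash through a nonlinearity. Stack many of these and you
# have the "layered matrices" in question.

def layer(inputs, weights, biases):
    outputs = []
    for row, bias in zip(weights, biases):
        activation = sum(w * x for w, x in zip(row, inputs)) + bias
        outputs.append(max(0.0, activation))  # ReLU nonlinearity
    return outputs

x = [0.5, -1.2, 3.0]            # toy input vector
W = [[0.1, -0.4, 0.2],          # toy 2x3 weight matrix (made up)
     [0.3, 0.8, 0.05]]
b = [0.0, 0.1]

print(layer(x, W, b))  # [1.13, 0.0]: one unit fires, one is zeroed out
```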
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 5 · 4 months ago
Sure, I’m not entitled to anything. And I appreciate your original reply. I’m just saying that your subsequent comments have been useless and condescending. If you didn’t have time to discuss further then… you could have just not replied.
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 3 · 4 months ago
“You’re wrong, but I’m just too busy to say why!”
Still useless.
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 61 · 4 months ago
I’m a software developer, and have worked plenty with LLMs. If you don’t want to address the content of my post, then fine. But “go research” is a pretty useless answer. An LLM could do better!
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 3 · 4 months ago
The only humans with no training (in this sense) are babies. So no, they can’t.
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 91 · 4 months ago
So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don’t think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.
So, not to be pedantic, but:
> AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.
Couldn’t you say the same thing about a person? A person couldn’t write something without having learned to read first. And without having read things similar to what they want to write.
> LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent.
This is kind of the classic Chinese room philosophical question, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we’d say that the LLM is actually thinking? And if you say no, the LLM is just an algorithm, generating probabilities based on training data or whatever techniques might be used in the future, how can you show that your own thoughts aren’t just some algorithm, formed out of neurons that have been trained based on data passed to them over the course of your lifetime?
> And when they start hallucinating, it’s because they don’t understand how they sound…
People do this too, though… It’s just that LLMs do it more frequently right now.
I guess I’m a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.
ProfessorScience@lemmy.world to Technology@lemmy.world • Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works • English · 3 · 4 months ago
What’s the difference?
ProfessorScience@lemmy.world to News@lemmy.world • ‘Cruel and thoughtless’: Trump fires hundreds at US climate agency NOAA • English · 4 · 5 months ago
climate change denial
ProfessorScience@lemmy.world to politics@lemmy.world • Trump’s new low: Swiping at McConnell’s childhood polio. • English · 10 · 5 months ago
I mean sure, rape is one thing, but this… how can he sink so low as to insult Mitch McConnell? WON’T SOMEBODY THINK OF THE MITCH MCCONNELL?!
ProfessorScience@lemmy.world to Games@lemmy.world • Large Language Models in Video Games? • English · 422 · 6 months ago
I think using LLMs to provide the dialogue for NPCs in an RPG is a use case that’s just begging to happen, i.e. townsfolk who don’t just give the same few replies every time, and who react to things you’ve done in the past beyond whatever prewritten options the developer thought of.
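Something like this sketch, say, where the prompt template and the `generate` function are hypothetical stand-ins rather than any particular engine’s API:

```python
# Sketch: build an NPC's dialogue prompt from a fixed persona plus the
# player's actual history, so replies can reference past deeds without
# the developer pre-writing every branch.

def generate(prompt: str) -> str:
    """Stand-in for whatever LLM backend the game would ship with."""
    return "[NPC line generated from the prompt]"

def npc_reply(persona: str, world_events: list[str], player_line: str) -> str:
    prompt = (
        f"You are {persona}.\n"
        "Things the player has done that you know about:\n"
        + "\n".join(f"- {event}" for event in world_events)
        + f"\nThe player says: \"{player_line}\"\n"
        "Reply in character, in one or two sentences."
    )
    return generate(prompt)

print(npc_reply(
    "a gruff blacksmith in the village of Taren",  # made-up example persona
    ["cleared the rats from the mill", "haggled rudely over a sword"],
    "Any work for me today?",
))
```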
ProfessorScience@lemmy.world to News@lemmy.world • Scientists say they are close to resurrecting a lost species. Is the age of de-extinction upon us? • English · 14 · 6 months ago
Seems likely that we’ll be better at making things go extinct than un-extinct for a while yet.
No, half the country voted for this. Or failed to vote against it.