ChatGPT = Holy F***ing S**t 😱

I know people like that :slight_smile:

Dang it @Olga it was ONE TIME!!! :joy::joy::joy::joy:

You promised to keep it a secret :pleading_face:


And THAT, my friends, is the issue.

It’s the same with people. Some people I trust implicitly… which I have learned over time to be accurate. Some other people I “trust but verify”… and the list goes on.

IBM Watson is doing this very thing, but it is thus far limited to purely data-based pursuits (you can argue whether weather forecasting is “factual”, but it is indeed data-based).

Just because ChatGPT says something doesn’t mean it’s the right answer, or even the only answer.


Eventually one will be asked, “Is there a God?” And it will answer, “There is now.”

And for real, check out some of Robert Miles’ videos on AI safety.


schitts creek hello GIF by CBC

Chess was cracked with brute-force search and handcrafted heuristics, which is not the same thing as ML; the two should not be conflated. They are fundamentally different technologies with different pitfalls.

I think you’re being imprecise here. “ChatGPT” is a natural language model. Its whole thing is stringing together statistically likely words. It is Google’s auto-complete on steroids. It inherently has no concept of facts or truth. There is an ongoing debate: one camp believes we just need bigger training sets and more processing power, and that will be good enough to fake it. There is also evidence that approach isn’t scaling the way the Moar Data people think it should. Truth remains elusive.
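To make “stringing together statistically likely words” concrete, here is a toy sketch, in Python, of what next-token sampling boils down to. The tiny table and the numbers in it are made up purely for illustration; nothing here resembles the scale or training of the real model.

```python
import random

# Toy next-token table: for one context, hand-picked probabilities.
# (Illustrative only -- a real model learns weights over a huge
# vocabulary from massive amounts of text.)
NEXT_TOKEN_PROBS = {
    "the sky is": {"blue": 0.85, "falling": 0.10, "green": 0.05},
}

def next_token(context: str) -> str:
    """Sample the next word from the table -- 'likely', not 'true'."""
    probs = NEXT_TOKEN_PROBS.get(context, {"[end]": 1.0})
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

# Nothing in here knows whether the sky *is* blue; it only knows which
# word tends to follow this context in the table.
print("the sky is", next_token("the sky is"))
```

“Likely” and “true” are different axes, and that is the whole problem.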

The opposing camp in that debate thinks we need to build entirely different architectures that can actually model the world and develop a sense of what is a fact and what isn’t. Yet it is unclear what that would look like in practice or how well it would perform.

Right now there’s enough money chasing this tech that we should get some kind of answer on this at some point. Though, people also threw billions at blockchain and crypto and that went nowhere. Investment is no guarantee of success.

Well, that’s actually the biggest problem here. If you’re not concerned about truth, or critically, are interested in diluting truth, this tech is ready for primetime. It gives plausible answers, which is awesome for those wanting to distribute disinformation. It can replace troll armies. It can generate deep fakes. It can saturate the internet so that even earnest AI projects are now ingesting disinfo at a massive scale, creating an ouroboros of bullshit.


Oh, the billions went somewhere… just not to the people the majority wanted them to go to. :wink:


Yeah, it probably isn’t entirely on the level for me to compare ML to a straight-up scam just because the two are both tech-related. It was just the first thing that popped into my head. Very little money went into developing the underlying technology of blockchains. All the cash went chasing after tokens and the ability to mine and exchange them.

True, totally different, but some believed a computer would never be a chess master, which seems completely silly today. Humans are --generally-- clever little monkeys, and I don’t have a crystal ball.

You’re right. I said “ChatGPT” when I meant “ChatGPT-like”. I don’t expect a natural language model (NLM) alone to be trained into a full AI, but rather an NLM combined with other AI technologies someday in the future. I too have read articles on the “Moar Data” debate, and I land somewhere in the middle. I see an NLM combined with a “telling the truth” model, not just a bigger NLM.
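Purely as a hypothetical sketch of what “combined” could mean (every function name here is made up; this is not any real system): let the NLM propose candidate answers and have a separate “truth” model score them before anything is returned.

```python
# Hypothetical generate-then-verify loop. Both models are stand-ins,
# not real APIs -- this only illustrates the shape of the combination.

def nlm_generate(prompt: str, n: int = 5) -> list[str]:
    """Stand-in for a language model returning n candidate answers."""
    return [f"candidate answer {i} to: {prompt!r}" for i in range(n)]

def truth_score(answer: str) -> float:
    """Stand-in for a fact-checking model: 0.0 = bogus, 1.0 = solid."""
    return 0.5  # a real verifier would check against sources or a world model

def answer(prompt: str, threshold: float = 0.8) -> str:
    candidates = nlm_generate(prompt)
    best = max(candidates, key=truth_score)
    if truth_score(best) < threshold:
        return "I don't know."  # refusing beats confidently making things up
    return best

print(answer("Is there a God?"))
```

Whether that second model can actually be built is, of course, the open question.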

well, this thread

Over My Head Reaction GIF by MOODMAN


A sidebar here…

My son is autistic (though high functioning) and struggles with language. When he was a child, if I said something that turned out to be incorrect, he’d accuse me of lying. We had to teach him that a lie requires some level of intended deceit.

So, abstracting that to AI (which I don’t think ChatGPT is considered to be), it is never going to lie to us.

HAL did. That’s why he had to kill them so he wouldn’t have to keep lying!

TBF Wonka wasn’t fun either and ran an unsafe workplace, possibly staffed with slave labor. The only reason anyone wanted to hang out with Wonka in the first place was because he was stupid rich.

Conflating ML models with general purpose AI is an even bigger leap than Deep Blue to ChatGPT. I’m not trying to pick on you, really. These chat and image bots have sucked up a lot of oxygen lately and the hype train has been running over everything.


Candy. I’m in it for the candy. Bathe me in that sweet chocolate river.


Good news for you: chocolate tends to have an SG (specific gravity) of 1.02, so you should float quite well and be safe!
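Back-of-the-envelope, taking that 1.02 figure at face value and assuming a typical human density of roughly 0.985 g/cm³ (both numbers are rough assumptions):

```python
# Archimedes: the submerged fraction equals the density ratio.
body_density = 0.985       # g/cm^3 -- rough figure for a human, lungs inflated
chocolate_density = 1.02   # g/cm^3 -- the SG quoted above

submerged = body_density / chocolate_density
print(f"{submerged:.0%} submerged, {1 - submerged:.0%} above the surface")
# ~97% under, ~3% sticking out: floating, technically, but only just.
```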


I don’t know how to feel about the fact that you just “Well actuallied” my meme

Ironically, I’d have cited the Futurama Slurm Factory episode if our posts were required to stick to the MLA handbook.

I mean, I wouldn’t even care if I went out by chocolate asphyxiation. Could be worse. :woman_shrugging:t2:

I think this is the 3rd or 4th utter dumpster fire of a thread this year… Is there something in the water? :rofl:

Fire Trash GIF
