Well, the AIs are already sh*tting in their own food dish: whenever someone posts GPT output, they scrape the post for data to train the next model.
Clearly, the answer is to weaponize that with a bot that deliberately and incessantly posts absolute nonsense everywhere, about everything, until the bots consider "the sky is green" a good response.
My wife does this with subscription services. She adds the name of the service as her middle name (“Jane Netflix Smith”). Now when she gets emails (even snail mail!) from random spam vendors addressed to “Jane Hulu” or “Jane Netflix” she knows exactly who is selling her data.
This is a great tip. I do something similar - in Gmail, you can add a + and any text after the username (before the @) and it will still be delivered to you. For example, if your email is imauser@gmail.com, you can enter imauser+hulu@gmail.com when you sign up to track whether Hulu sells your information (they do).
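If you want to automate the trick, the tagging and the leak detection are both one-liners. Here's a minimal Python sketch (the address `imauser@gmail.com` and the `hulu` tag are just placeholders from the example above):

```python
import re

def tag_address(base: str, service: str) -> str:
    """Build a Gmail plus-address like imauser+hulu@gmail.com."""
    local, domain = base.split("@")
    return f"{local}+{service}@{domain}"

def leaked_by(received: str):
    """Given the address spam arrived at, recover the service tag, if any."""
    match = re.match(r"^[^+@]+\+([^@]+)@", received)
    return match.group(1) if match else None

print(tag_address("imauser@gmail.com", "hulu"))  # imauser+hulu@gmail.com
print(leaked_by("imauser+hulu@gmail.com"))       # hulu
print(leaked_by("imauser@gmail.com"))            # None (untagged address)
```

One caveat: plenty of sign-up forms reject a `+` in the email field, and a spammer who knows the trick can strip everything between the `+` and the `@` - which is exactly why the middle-name version above is harder to launder.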
Nice one! I wonder how much effect this will have on the $300/hr consulting market, now that the customer knows the consultants are tapping into GPT, or can attempt it themselves. I know the whole story of "I am charging you for 20 years of experience; I know where to strike with the hammer." But customers don't value it that way - if they did, one hour of Jose's time would be paid $1500. Instead they'll nickel-and-dime him down to $50/hr.
LLMs are actually very good at translating, since they're designed to find patterns in words. You give it the base pattern, and if it can find synonyms in the target language, it can usually reconstruct the original statement in the second language with most of the intent intact. I mean languages in general, not just programming languages.
Having said that, I wouldn’t trust it to spit out optimized LINQ from SQL except for some really basic stuff. It might recognize some common techniques, but you still have to know your data to know how to best write the LINQ query vs SQL.
Probably less than user-accessible data tools like Tableau, or even Epicor’s visual widget-based solutions in BAQ and BPM. Is a chat-prompt really more efficient or intuitive than visual representation? Is inference really a substitute for deterministic logic and rules? Most of the time, no. It is not.
This stuff has a place (I would be sweating if I were in public customer service), but ultimately ERP is about turning deterministic rules and policy into functionality. Current ML tech is, from the ground up, not built to do that. Getting it to do that is going to be a herculean effort. I don't think the current LLM models are up to the task.