ChatGPT, AI, and the use case with Epicor

How is this any different than you? Are you a robot too!?

And what did ChatGPT use to train on? StackOverflow to some extent. What happens when the training data keeps looking at itself? It might provide some insights occasionally but if the information sources dry up, how good will the models get?

2 Likes

Suspicious Monkey GIF by MOODMAN

Us. All of us. They scraped our posts, meant to help our peers as a contribution to the community, and are trying to turn them into their property for their profit. It’s a land grab for our own ideas. The artists out there, to their credit, grasped this immediately the second image generators started popping up. It’s going to take a bit for everyone to realize this is inherent to all generative AI.

And industry is intentionally trying to sleight of hand this by writing vague open letters about how this stuff will lead to Skynet and Terminators murdering the children. It’s like the plot of a dystopian cyberpunk novel.

The one saving grace is that these programs do not appear to be that hard to fork and iterate. And the most unscrupulous outfits may not be able to commercialize models built on unlicensed training data, since that data is (on some level) embedded within the model and it’s a statistical probability that it will spit out infringing content.

1 Like

One solution to this might be an old trick that has been used for decades: “salting” the data with known false entries that can be traced.

  1. In the mailing list business, we salted our lists with a few random names/addresses that delivered back to us. These addresses were always sent out with our mailing list “rentals,” and they were coded so that we knew the source of the rental. If we saw an address used more than once, we knew someone had broken their agreement to use our list only one time (or they gave/sold it to someone else).
  2. In the map-drawing business, there are “phantom settlements,” fake towns inserted onto a map. If that town appears on someone else’s map, then the original creator of the map knows that someone stole their copyrighted data (see Phantom settlement - Wikipedia).

Perhaps in the same way, we could insert known salted content that would show up when ChatGPT gives an answer, giving us a trace back to the source.
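
As a rough sketch of how that could work (the record layout, names, and detection logic here are all hypothetical, not anything from the post above), salting only needs two pieces: a generator that plants a few traceable fake entries per recipient, and a scanner that looks for those markers in suspect output:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical illustration of "salting" a published data set with
// traceable fake records, then checking suspect content for them.
public static class SaltDemo
{
    // Each salt record encodes who the data was released to,
    // so a match identifies the leaking recipient.
    public record SaltRecord(string FakeName, string Recipient);

    public static List<SaltRecord> BuildSalt(string recipient) => new()
    {
        new SaltRecord($"Agloe {recipient} Industries", recipient),
        new SaltRecord($"Jane {recipient} Smith", recipient)
    };

    // Scan any text (a scraped page, a model's answer, a rented list)
    // for the planted markers.
    public static IEnumerable<SaltRecord> FindLeaks(string content, IEnumerable<SaltRecord> salt) =>
        salt.Where(s => content.IndexOf(s.FakeName, StringComparison.OrdinalIgnoreCase) >= 0);

    public static void Main()
    {
        var salt = BuildSalt("Acme");
        var suspectOutput = "...Agloe Acme Industries is a leading supplier of...";

        foreach (var hit in FindLeaks(suspectOutput, salt))
            Console.WriteLine($"Leak traced to: {hit.Recipient}");
    }
}
```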

1 Like

A few weeks ago I had it write me a poem about how great Epicor is:

In the world of business, speed is key,
And Epicor ERP is as fast as can be.
From inventory to sales and beyond,
It streamlines processes with a single bond.

Its features are vast, its functions robust,
No task too daunting, no hurdle too tough.
With real-time data at your fingertips,
Decision-making is swift, no time for slips.

Manufacturing flows like a well-oiled machine,
From planning to delivery, every step clean.
Supply chain management is second to none,
With Epicor ERP, everything is done.

Time is money, and Epicor ERP knows,
How to make the most of it, as it goes.
It boosts productivity like never before,
Efficiency and speed, always at the core.

So if you seek a system that won’t let you down,
Epicor ERP is the best around.
With its speed and productivity, you’ll surely see,
A world of success, just wait and see.

Well, the AI’s are already sh*tting in their own food dish: whenever someone posts GPT output, those posts get scraped as training data for the next model.

Clearly, the answer is to weaponize that with a bot that deliberately and incessantly posts absolute nonsense everywhere about everything until the bots consider the sky being green a good response.

/s

I have used it to translate a resignation letter into the style of Dr. Seuss. I also tried Stephen King, but it was 3x as long…

My wife does this with subscription services. She adds the name of the service as her middle name (“Jane Netflix Smith”). Now when she gets emails (even snail mail!) from random spam vendors addressed to “Jane Hulu” or “Jane Netflix” she knows exactly who is selling her data.

My next salary negotiation email, as written by Tywin Lannister.

This is a great tip. I do something similar - in Gmail, you can add a + and any text after the base email address and it will be delivered. For example, if your email is imauser@gmail.com, you can enter imauser+hulu@gmail.com to track if Hulu sells your information (they do) after you sign up.
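
A minimal sketch of the same trick in code (the addresses and service names are made up): build one tagged address per service at signup, and read the tag back off incoming spam to see who shared it.

```csharp
using System;

// Hypothetical sketch of the Gmail "+tag" trick: one tagged address per
// service, and the tag on incoming mail tells you who shared your address.
public static class PlusAddressing
{
    // "imauser@gmail.com" + "Hulu" -> "imauser+hulu@gmail.com"
    public static string Tagged(string baseAddress, string service)
    {
        var parts = baseAddress.Split('@');
        return $"{parts[0]}+{service.ToLowerInvariant()}@{parts[1]}";
    }

    // Pull the tag back out of an incoming address, or null if untagged.
    public static string TagOf(string address)
    {
        var local = address.Split('@')[0];
        var plus = local.IndexOf('+');
        return plus >= 0 ? local.Substring(plus + 1) : null;
    }

    public static void Main()
    {
        Console.WriteLine(Tagged("imauser@gmail.com", "Hulu")); // imauser+hulu@gmail.com
        Console.WriteLine(TagOf("imauser+hulu@gmail.com"));     // hulu -> Hulu shared the address
    }
}
```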

2 Likes

There are “lies, damned lies, and statistics.” - attributed to Benjamin Disraeli.

1 Like


Just today…

  1. I created a fairly complex BAQ in the Kinetic (browser-based) BAQ program, with summaries, inner subqueries, etc.
  2. Tested it.
  3. Copied the BAQ’s SQL from the generated SQL display in the BAQ.
  4. Pasted it into ChatGPT, asking it to convert it into a C# LINQ statement.
  5. Results: it quickly gave me a very nice C# statement that worked (see the sketch below).
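
For a sense of what that round trip looks like (the table and field names below are invented for illustration, not taken from the actual BAQ), here is a minimal, runnable sketch: the SQL is shown in a comment, and the LINQ beneath it is the general shape of the kind of translation described above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Invented example of a SQL-to-LINQ conversion of the kind described above.
//
// Input SQL (simplified):
//   SELECT c.CustID, SUM(o.OrderAmt) AS TotalSales
//   FROM OrderHed o JOIN Customer c ON c.CustNum = o.CustNum
//   WHERE o.OpenOrder = 1
//   GROUP BY c.CustID
public static class SqlToLinqSketch
{
    record Customer(int CustNum, string CustID);
    record OrderHed(int CustNum, decimal OrderAmt, bool OpenOrder);

    public static void Main()
    {
        var customers = new List<Customer> { new(1, "ACME"), new(2, "GLOBEX") };
        var orders = new List<OrderHed> { new(1, 100m, true), new(1, 50m, true), new(2, 75m, false) };

        // A C# LINQ equivalent of the SQL above.
        var results =
            from o in orders
            where o.OpenOrder
            join c in customers on o.CustNum equals c.CustNum
            group o by c.CustID into g
            select new { CustID = g.Key, TotalSales = g.Sum(x => x.OrderAmt) };

        foreach (var r in results)
            Console.WriteLine($"{r.CustID}: {r.TotalSales}");
    }
}
```
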
2 Likes

Nice one! I wonder how much effect this will have on the $300/hr consulting market, once the customer knows the consultants are tapping into GPT, or can attempt it themselves. I know the whole story of “I am charging you for 20 years of experience; I know where to strike with the hammer.” But customers don’t value it that way - if they did, one hour of Jose’s time would be paid $1,500; instead they’ll nickel-and-dime him for $50/hr :slight_smile:

At this point, it could affect it in the way described here for graphic designers :slight_smile:

2 Likes

LLMs are actually very good at translating, since they’re designed to find patterns in words. You give it the base pattern, and if it can find synonyms in the target language, it can usually reconstruct the original statement in the second language with most of the intent intact. I mean languages in general, not just coding languages.

Having said that, I wouldn’t trust it to spit out optimized LINQ from SQL except for some really basic stuff. It might recognize some common techniques, but you still have to know your data to know how to best write the LINQ query vs SQL.
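
To illustrate that point with purely invented data and names: both queries below return the same total, but a literal, machine-translated version joins everything before filtering, while knowing the data (here, that only a small fraction of orders are open) lets you filter first and avoid touching rows you will throw away.

```csharp
using System;
using System.Linq;

// Illustrative only: two LINQ versions of the same aggregate.
public static class KnowYourData
{
    record Order(int CustNum, bool OpenOrder, decimal Amt);

    public static void Main()
    {
        var customers = Enumerable.Range(1, 5).ToDictionary(i => i, i => $"CUST{i:00}");
        var orders = Enumerable.Range(0, 1_000)
            .Select(i => new Order(i % 5 + 1, i % 100 == 0, i))
            .ToList();

        // Literal translation: join everything, then filter.
        var naive = orders
            .Join(customers, o => o.CustNum, c => c.Key, (o, c) => new { c.Value, o })
            .Where(x => x.o.OpenOrder)
            .Sum(x => x.o.Amt);

        // Data-aware version: filter first, join only what survives.
        var better = orders
            .Where(o => o.OpenOrder)
            .Join(customers, o => o.CustNum, c => c.Key, (o, c) => new { c.Value, o })
            .Sum(x => x.o.Amt);

        Console.WriteLine($"{naive} == {better}"); // same answer, less work in the second
    }
}
```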

Probably less than user-accessible data tools like Tableau, or even Epicor’s visual widget-based solutions in BAQ and BPM. Is a chat-prompt really more efficient or intuitive than visual representation? Is inference really a substitute for deterministic logic and rules? Most of the time, no. It is not.

This stuff has a place (I would be sweating if I was in public customer service), but ultimately ERP is about turning deterministic rules and policy into functionality. Current ML tech is, from the ground up, not built to do that. Getting it to do that is going to be a herculean effort. I don’t think the current LLM models are up to the task.

1 Like

I’ll be writing up some AI tutorials.

Here’s the first; I use this method quite a bit, haha: Creating a Question Answering Chat Bot For a Set of PDFs (Such as the Epicor Manuals) [Tutorial]

Nice Eli! Moved it to Experts Corner.

1 Like