In what ways is your company using AI successfully?

3 Likes

I've been lurking on this one, but wanted to drop in to mention: we'd love to have a chat with some of you at some point about your views/experiences trying to apply AI in your businesses. Open invitation - give me a ping if interested.

3 Likes

I have been seeing pretty solid productivity gains using Cursor (a VSCode fork) for web development. I'm hoping that Prism or some other AI solution comes to my company and eliminates the need for the dev team to write ad hoc queries for people - that would be an awesome time saver, and it's probably the #1 real scenario that would benefit my company. Outside of the dev team, it's mostly limited to Copilot turning bullet points into a big email, and then on the other end digesting big emails back into bullet points…

:people_hugging:

I use GPT for code in many languages. It is fast and mostly accurate. You still have to understand something about how to code, but it can save a lot of trial and error looking for the right syntax. I usually use ChatGPT for coding.

I use Copilot in the Edge browser to draft emails to my staff - for example, when I find out about a new vulnerability. Here is an example email I sent my staff a couple of weeks ago. GPT had a heavy hand in drafting it, but I always proofread and quality/sanity check multiple times before I send.

Good morning,
We have seen an uptick in email spoofing. Several users have reported receiving emails that appear to be legitimate messages from an internal user. The content of the message is what gives it away as a phishing attempt. While they may not ask you to click a link, they may ask you to respond or change some information. If you suspect the email is not legitimate, flag it with the Phish Alert Button. Please see below for critical information regarding communications from IT, HR, and Management.

Information Technology
•	I am the only IT person here at Vermont Aerospace. [IT backup] can also perform some IT duties, but all questions should come to me first.
•	IT will never ask for your password for any account.
•	IT will never ask you to click a link to install or update software yourself.

Human Resources
•	[HR PERSON] is the only person in HR at Vermont Aerospace. [Backup HR] can also perform some of the HR duties, but all questions should go to [HR Person] first.
•	HR will never ask for your password for any account.
•	HR will never ask you to click a link to install or update software.

Management
•	Management will never ask you to purchase gift cards for anything.
•	Management will never ask you for your password for any account.
•	Management will never ask you to click a link or install software without IT support.

If you suspect an email that came from someone at Vermont Aerospace may be a spoofed message, please notify me right away. While we can’t do anything about the spoofing, we can continue to notify our users and our customers that our email is being spoofed.

Please let me know if you have any questions.
Thank you,

I often use this approach when I am reading about a complicated CVE and I want to communicate it in a way that my staff will understand (not IT folks).

Sometimes I use Copilot to help me find options for hardware that I am having trouble locating online. For example, I asked it to help me find some affordable 60W fiber lasers for marking steel. It gave me options that I was not finding in my regular search. Granted, I am not a purchasing expert, but it gave me a handful of viable options in seconds.

1 Like

I was just trying to get it to turn some 2D prints into a 3D model. Fricking thing is good at making excuses and arguing. By the time you get it to do it correctly you could have just done it yourself. :face_with_steam_from_nose:

So far, most people having success seem to be using it for text-based things: coding, writing, or getting data out of written text.

MIT just dropped a fresh study:

95% of AI pilots and products are failing. Turns out, the corporate obsession (Epicor included) with shoving shiny new tools into products isn’t exactly a winning strategy.

Here’s how I see it: LLMs are great tools when you use them to go faster. They can repeat patterns like a champ and write code about as well as my most junior engineers. Perfect for that kind of grunt work.

But the inability to control hallucinations or lock down behavior with algorithmic certainty means they’re not very good at much else. Bolting them half-baked into products just to say “we have AI” is basically lighting money on fire.

Just look at the examples: chatbots promising discounts that don’t exist, giving wildly wrong support answers, Avalara’s bot telling me to reinstall my Netsuite extension… for Kinetic. Or Prism proudly informing me that I had 18 orders total in the system because it grabbed the wrong BAQ. Users already struggle with trust during ERP rollouts; now we want to give them a tool that lies to them constantly and confidently?

I'm not bashing Epicor; every vendor is chasing this. But IMO companies (hint hint) would be far better off improving their core product and fixing long-standing issues. Heck, even pointing LLMs at QA and testing would be a smarter, safer use of the tech.

Don’t get me wrong, I use it heavily to make my dev team faster and more productive. But that’s where it stops, at least for me, for a good long while.

Meanwhile, I actively shoo away every executive who reads random LinkedIn articles on the daily and has found a neat solution in search of a problem to fix.

12 Likes

Happy Bel Air GIF by PeacockTV

3 Likes

LLM = Large LANGUAGE Model

My (not very) Humble Opinion… they are trained on language. Words. If the set of words in their training is a process document, they learn that along with everything else. If the words are a computer program, they learn that. How each thing they learn is prioritized in giving responses to queries is a chunk (at least) of the secret sauce.

I don't think a lot of people realize how much work it is to tokenize some problems, though. Programming and answering Epicor questions are the only parts of my job where I can really understand how AI could be applied. For the mechanical engineering / mfg engineering / people aspects of my job, I'm at a bit of a loss as to how to take advantage of AI.

Just a theory, but… if you upgrade to 2024+ you can use SQL to BAQ, in which case it would be relatively trivial to set up an AI workflow that generates the appropriate SQL and then feeds the results into the SQL->BAQ conversion. You could provide the major schema in the system prompt, or provide a lookup tool or RAG. I've been doing something similar with SQL queries based on user input.
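
A minimal sketch of what I mean, assuming the OpenAI Python SDK; the model name, the trimmed-down schema, and the downstream hand-off are placeholders, not anything Epicor ships:

```python
# Sketch: turn a user question into SQL, with the relevant schema in the system prompt.
# Assumes the OpenAI Python SDK; the schema text and downstream hand-off are placeholders.
from openai import OpenAI

client = OpenAI()

ERP_SCHEMA = """
OrderHed(Company, OrderNum, CustNum, OrderDate, OpenOrder)
OrderDtl(Company, OrderNum, OrderLine, PartNum, OrderQty)
"""  # trimmed; in practice pull in only the tables relevant to the question (lookup tool / RAG)

def question_to_sql(question: str) -> str:
    """Ask the model for a single read-only SELECT against the known schema."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # any capable model; pick what you have access to
        messages=[
            {"role": "system",
             "content": "Write one T-SQL SELECT statement for the schema below. "
                        "No DML, no commentary, SQL only.\n" + ERP_SCHEMA},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

sql = question_to_sql("How many open orders does each customer have this month?")
# Your code from here: validate the SQL, then feed it into the SQL->BAQ conversion.
```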

Otherwise, you can try giving it the tools to create a BAQ, but this would likely work better if fine-tuned with examples, making it significantly less trivial.

You could give it a spec of the REST API but the most reliable approach is one tailored to AI.

i.e. based on call context:

  • start_create_baq (name: string)
  • add_baq_parameter (name: string, type: string, value: any)
  • save_baq (no params)

Then you’ve written the code to call the REST API to perform exactly what the tool does.
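
Roughly what that could look like, assuming the OpenAI chat completions tools API; only the tool names come from the list above, the JSON schemas and the REST dispatch are my guesses:

```python
# Sketch: expose the three BAQ tools to the model; your own code makes the REST calls.
# Tool names are from the list above; everything else here is an assumption.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [
    {"type": "function", "function": {
        "name": "start_create_baq",
        "parameters": {"type": "object",
                       "properties": {"name": {"type": "string"}},
                       "required": ["name"]}}},
    {"type": "function", "function": {
        "name": "add_baq_parameter",
        "parameters": {"type": "object",
                       "properties": {"name": {"type": "string"},
                                      "type": {"type": "string"},
                                      "value": {"type": "string"}},
                       "required": ["name", "type", "value"]}}},
    {"type": "function", "function": {
        "name": "save_baq",
        "parameters": {"type": "object", "properties": {}}}},
]

resp = client.chat.completions.create(
    model="gpt-4o",
    tools=TOOLS,
    messages=[{"role": "user", "content": "Create a BAQ named OpenOrdersByCustomer."}],
)

for call in resp.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments or "{}")
    # Dispatch to your own REST wrappers here; printing stands in for the real calls.
    print(f"would call {call.function.name} with {args}")
```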

For both approaches, AI models like o3 (med-high), o4-mini (high), GPT-5 (high), and Claude Sonnet (3.5 (v2), 3.7, 4.0) perform tool call selection excellently. OpenAI allows reasoning fine-tuning, is cheaper, and can be allocated through Azure.

AI knows a lot about calculation methods, material properties, human resources, and processes. Reasoning models can solve or architect solutions to your problems. It can rewrite your abrasive emails, give almost all of your teams an extra edge, and more. The gain is productivity from existing staff who effectively apply AI to their work (not just naively).

So we gave ChatGPT 5 a spreadsheet of all the sheet metal rectangles we cut for jobs and asked it to come up with 7 pre-cut blanks that would minimize the waste. No luck, no matter how much clarification and help we gave it. How would you approach this problem?

1 Like

Hmm, this sounds familiar for some reason…

The real challenge with these is the context you have to give it about the schema, because of the sheer scale of it.

2 Likes

Don't ask GPT to modify a spreadsheet. Understand the task that needs to be accomplished and then ask GPT to write the code to do it. It can write M code for the Power Query editor, it can write VBA (if you're still into that), and it can write complex Excel formulae. This would be my approach: tell GPT the data you have in the Excel spreadsheet, what cells it is in, and what you want. It might get better results. That is a complex problem to solve, though!
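
For example, the kind of script it might hand you back could look something like this (pure sketch - the file name and the Width/Length/Qty column headers are made up):

```python
# Sketch of a script GPT could write for you instead of touching the file itself.
# Assumes pandas + openpyxl; file name and column names are invented for illustration.
import pandas as pd

df = pd.read_excel("cut_rectangles.xlsx")           # one row per rectangle cut for a job
df["Area"] = df["Width"] * df["Length"] * df["Qty"]

# How often each blank size shows up, and how much area it consumes.
summary = (df.groupby(["Width", "Length"], as_index=False)
             .agg(TotalQty=("Qty", "sum"), TotalArea=("Area", "sum"))
             .sort_values("TotalQty", ascending=False))

print(summary.head(10))  # the most common sizes are candidates for pre-cut blanks
```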

We gave it a much easier problem afterward: we gave it the same list of jobs and the rectangles needed, told it what pre-cut blanks were used, and it still couldn't calculate the scrap for us.

1 Like

In our experience, patterns that can be resolved deterministically should remain that way. AI works great as an advisor in these kinds of scenarios. Try feeding it your use case, maybe even some images, and give it access to search the web. Don't use AI as a data processor; I would say it isn't there yet, despite ever-increasing capabilities. It can tell you how to process the data, and even write the code or queries to massage the data, but it won't do a great job on raw data.

We also have sheet metal jobs that require tool optimization, and as a contract shop this is highly dynamic for us. The original ask was whether AI could figure out the calculation itself just by being given the data. Same as your case, this failed completely, and what we've ended up with is an evolving algorithm to calculate it that AI wrote.

This aligns with Nate’s experience.
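
For what it's worth, the deterministic piece can stay dead simple; something along these lines (ignoring kerf, edge margins, and nesting, which is where the real algorithm lives):

```python
# Minimal sketch of a deterministic scrap calculation - the kind of thing the model
# can write and you then own. Kerf, margins, and nesting rules are deliberately omitted.
from dataclasses import dataclass

@dataclass
class Rect:
    width: float
    length: float

    @property
    def area(self) -> float:
        return self.width * self.length

def scrap_for_job(blanks_used: list[Rect], parts_cut: list[Rect]) -> float:
    """Scrap = area of the blanks consumed minus the area of the parts actually cut."""
    return sum(b.area for b in blanks_used) - sum(p.area for p in parts_cut)

# Example: one 48x96 blank, three 20x30 parts cut from it -> 4608 - 1800 = 2808 scrap.
print(scrap_for_job([Rect(48, 96)], [Rect(20, 30)] * 3))
```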

edit:

If you're using ChatGPT, look at the limits: 32k context on GPT-5, and GPT-5 Thinking is 192k, but you do not control the reasoning amount (minimal, low, med, high). The model itself does 400k tokens of context, probably most performant around 128k.

The API gives you ultimate control.

This went over my head - unless you're hinting that Epicor is doing this same thing, or that others on the forum have already suggested it. I just thought it up after reading Jake's reply. Or "this sounds easy" and it wasn't :joy:

I’ve used the SQL to BAQ exactly 1 time and it worked.

Multi-step LLM calls have been used to optimize tool selection (overloading a system prompt with lots of tool examples/usages plus the actual definitions leads to lower success rates in tool decision making). I imagine a similar technique could be applied to dynamically pull schema into the prompt based on an analysis of the user query.
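
A rough sketch of that two-step idea, assuming the OpenAI Python SDK; the table catalog and the get_schema() lookup are placeholders:

```python
# Sketch: a cheap first call picks the relevant tables, then only those schemas go
# into the prompt of the second, SQL-generating call. Names here are placeholders.
import json
from openai import OpenAI

client = OpenAI()
TABLE_CATALOG = ["OrderHed", "OrderDtl", "Customer", "Part", "JobHead", "LaborDtl"]

def pick_tables(question: str) -> list[str]:
    """Step 1: ask which tables matter. A real version would validate the output."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Reply with only a JSON array of table names from this list "
                        f"that are needed to answer the question: {TABLE_CATALOG}"},
            {"role": "user", "content": question},
        ],
    )
    return json.loads(resp.choices[0].message.content)

def get_schema(table: str) -> str:
    return f"{table}(...)"  # placeholder: look up the real columns here

question = "Total labor hours by job last week?"
schema_snippet = "\n".join(get_schema(t) for t in pick_tables(question))
# Step 2: schema_snippet goes into the system prompt of the SQL-generating call.
```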

Fine-tuning against samples improves the success rate, and the ability to do reasoning fine-tuning on top of that is a powerful combination.

Maybe try breaking it down into multiple asks. On the first pass, maybe ask it how many of each blank size have been required. Then ask it to determine the best way to cut the top 5 (or 8, or however many) most common sizes, at their typical use percentages, from sheet stock.

Maybe worth a shot.

No doubt with enough effort it can be made to happen. I just find it very frustrating that the perception of AI is that it's this smart being you can ask for help, when my experience has been that for anything other than simple things (that Google has always worked on…) it actually takes a fair bit of skill to get a useful output.

I mean @zachg and @AlwaysFocus, and to a lesser extent @Mark_Wonsil, are using AI terms and concepts I am not even familiar with or don't understand, which to me indicates it's not a trivial tool, even if it is a useful technical tool to have in your belt.

Zach explained it already, but maybe a simpler way to restate it is: a core principle of building genai systems is "Only apply genai where no other reasonable programming solution exists."

For example, there is no reasonable non-ML way to turn an arbitrary grade-school math word problem into the correct math formula to solve it. There is, however, a great programming solution for running a math formula to calculate the answer. So you don't let genai actually do the math (it can't); you give it a math tool (aka a function call) and say: call this program when you want to calculate something. It can then do the problem text → formula part and just call the calculator.

So the use case is where I would add some tools to the LLM's toolkit so that it can get out its calculator, so to speak, to figure out the scrap or the pre-cut blanks instead of trying to predict (guess with probability) what the scrap is. Where it adds value is:

  • it can help write the tools in the first place - "help me write a function that extracts a portion of an Excel file as a table given column names…" - which you then test and keep forever.
  • with the tools in its toolbox, it can take your natural-language asks and go plan and produce an answer directly, mixing the tools together into a plan that it then runs (sketched below):
  1. I'll extract the table from Excel with tool: extractTable(sheet, columns)
  2. I'll use generatePrecutBlanks(count, inputrects) with the extracted table data
  3. I'll return those to the user with a friendly answer
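
A sketch of that loop, assuming the OpenAI tools API; the tool names follow the plan above, and the two Python functions are stubs for the code you'd write once, test, and keep:

```python
# Sketch of the "give it a calculator" loop: the model plans, your code does the math.
# extract_table / generate_precut_blanks are stubs; the real bodies are yours to write.
import json
from openai import OpenAI

client = OpenAI()

def extract_table(sheet: str, columns: list[str]) -> list[dict]:
    return []  # e.g. pandas.read_excel + column selection - tested once, kept forever

def generate_precut_blanks(count: int, inputrects: list[dict]) -> list[dict]:
    return []  # your deterministic blank-optimization algorithm goes here

TOOLS = [
    {"type": "function", "function": {"name": "extractTable", "parameters": {
        "type": "object",
        "properties": {"sheet": {"type": "string"},
                       "columns": {"type": "array", "items": {"type": "string"}}},
        "required": ["sheet", "columns"]}}},
    {"type": "function", "function": {"name": "generatePrecutBlanks", "parameters": {
        "type": "object",
        "properties": {"count": {"type": "integer"},
                       "inputrects": {"type": "array", "items": {"type": "object"}}},
        "required": ["count", "inputrects"]}}},
]
HANDLERS = {"extractTable": extract_table, "generatePrecutBlanks": generate_precut_blanks}

messages = [{"role": "user",
             "content": "Suggest 7 pre-cut blank sizes that minimize waste for cuts.xlsx."}]

while True:
    resp = client.chat.completions.create(model="gpt-4o", tools=TOOLS, messages=messages)
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print(msg.content)  # step 3: the friendly answer back to the user
        break
    messages.append(msg)  # keep the model's plan/tool requests in the conversation
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = HANDLERS[call.function.name](**args)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": json.dumps(result)})
```
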
6 Likes

For some reason that description makes it sound like a very fancy user interface, but it does help re-orient my thinking.

1 Like