AAAARRGH, MRP is failing for us today and I need to look at the MRP logs to see why. Of course, Kinetic cloud has decided to crash when I attempt to use the server file download to get to the MRP logs. Apparently ours has not been cleaned out for over a year, and that is what is causing the browser to crash. Now I have to wait for the Cloud team to clear out old logs, whereas on-prem I could easily browse to the logs I needed.
Those Ideas were created in 2024… Come on, Epicor, that MRP one has a heap of votes…
OK, if they don’t put some guard rails in, it is going to blow up their db. And when it comes to logging… I heard someone mention Datadog recently…
The first step would be to get all of these logs into some standard format, I guess.
What irks me is that the list of log files for download isn’t even sortable, and there are often multiple files for the same date, so I have to hunt through the list to find them.
The thing is, how do you contain the costs… and provide the flexibility… So I’m assuming there are conflicting interests.
Tables might sound old school, but they may be an interim step. They do, however, give the business easy visibility rather than yet another tool that needs to be managed, maintained, and paid for. And did I say learnt…
With that being said, it’s all about being able to cut your cloth to suit your measure. You may not be a big shop, so a set of dashboards may be all you need. Or you may be a bigger place with a dedicated operations team that uses a consolidated tool like Datadog or MS Application Insights to measure and manage many different types of logging data.
The trick with logging is making log storage as ignorant of its contents as possible. In this case, dropping logs in the database should actually reduce technical commitments and exploitable surface.
Madness lies beyond any log table structure more complicated than
create table Ice.Logs(
    ID int identity(1,1) primary key,
    LogSource varchar(128) not null,
    LogText varchar(2048) not null,
    LogDatetime datetime2 not null default current_timestamp
)
because the text of logs is arbitrary and capricious.
Logs and their parsing are technical enough that it would be totally appropriate to provide no dedicated UI at all. Relying on SQL via a BAQ to read logs is usually a lower barrier than parsing the log files.
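To make that concrete, here is a minimal sketch of the kind of read a BAQ would wrap, written as a throwaway Python script with pyodbc. The server, database, and driver names are placeholders, and the table is the Ice.Logs example above.

import pyodbc

# Placeholder connection string; point it at whichever database holds Ice.Logs.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=YourEpicorDb;Trusted_Connection=yes"
)
cursor = conn.cursor()
# The whole "log parser": filter by source and text, newest first.
cursor.execute(
    """
    select top 100 LogDatetime, LogSource, LogText
    from Ice.Logs
    where LogSource = ? and LogText like '%error%'
    order by LogDatetime desc
    """,
    "MRP",
)
for logged_at, source, text in cursor.fetchall():
    print(logged_at, source, text)

The select in the middle is the part a BAQ would expose; no regexes, no hunting through files.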
Dumping logs in a naive table can eliminate a UI and its upkeep, which is nice for the developer team. Unless they’d get in trouble for reporting negative LOC, but surely no one manages by LOC any more, right?
Server admins always like keeping users’ grimy paws out of their server’s directories.
Most modern logging systems allow multiple sinks, so it’s not a matter of picking one place anymore. We can put logs in a table, on the server’s file system, and ship them asynchronously to immutable remote blob storage.
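To show how cheap that is to set up, here is a minimal multi-sink sketch using Python’s standard logging module: one logger fanning out to a console stream, a rotating file, and a table shaped like the Ice.Logs example above. The logger name, file names, and the sqlite3 stand-in for SQL Server are all illustrative.

import logging
import sqlite3
from logging.handlers import RotatingFileHandler

class TableHandler(logging.Handler):
    # A content-ignorant table sink, one row per record. sqlite3 stands in
    # for the real Ice.Logs table so the sketch runs anywhere.
    def __init__(self, db_path):
        super().__init__()
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "create table if not exists Logs("
            "ID integer primary key autoincrement,"
            "LogSource text not null,"
            "LogText text not null,"
            "LogDatetime text not null default current_timestamp)"
        )
        self.conn.commit()

    def emit(self, record):
        self.conn.execute(
            "insert into Logs(LogSource, LogText) values (?, ?)",
            (record.name, self.format(record)),
        )
        self.conn.commit()

log = logging.getLogger("MRP")
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler())  # streamed to the session
log.addHandler(RotatingFileHandler("mrp.log", maxBytes=1_000_000, backupCount=5))  # server file system
log.addHandler(TableHandler("logs.db"))  # database table
log.info("MRP regeneration finished with no errors")

An asynchronous fourth sink to immutable blob storage would start from logging.handlers.QueueHandler, so slow uploads never block the process.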
There just needs to be a simple, easy way out of the box that meets the basic need. “Hey, production team member, you’re in charge of looking at the MRP logs today, just to ensure there were no problems.”
They log into Epicor, and there’s the dashboard… Oh yeah, and the errors are in a different shade…
Sifting through text files or having to roll a Python script in my Jupyter notebook is not really the level I was going for.
Which is how this thread started. It’s too complicated to retrieve logs.
I don’t think there is one solution to rule them all here. Putting a week’s worth of MRP logs in the database? Sure, why not? Five years’ worth? No, thank you. Having developers access files on the server? Not if you can avoid it. Having developers keep a window open with logs streamed to their session? Maybe. Keeping detailed logs in less expensive immutable storage for performance, audit, or security analysis? Why not? This is why I propose a multi-sink approach to logging.
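And since guard rails came up earlier: the table sink only stays a week’s worth if something trims it. A hedged sketch of a nightly retention job, assuming the Ice.Logs table above; the batch size, window, and connection string are illustrative only.

import pyodbc

RETENTION_DAYS = 7  # keep roughly a week of logs in the hot table

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=YourEpicorDb;Trusted_Connection=yes"
)
cursor = conn.cursor()
# Delete in small batches so the trim never holds a long lock on the table.
while True:
    cursor.execute(
        "delete top (5000) from Ice.Logs "
        "where LogDatetime < dateadd(day, ?, sysutcdatetime())",
        -RETENTION_DAYS,
    )
    conn.commit()
    if cursor.rowcount == 0:
        break

Older rows would already be sitting in the cheap immutable sink, so nothing is lost by trimming the hot copy.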