I know I am late to the party, but you are on cloud, and I noticed in a recent update we did to 2023.1.5 that with updatable dashboards Epicor is suddenly running saves of the BAQ where it wasn’t before. Since you have it updating other tables, try running trace logging and see if it is doing something it shouldn’t be. I have one dashboard that takes approximately 3 minutes to load, and it only pulls one table for updating. I noticed that if the “multiple dirty rows” check box is checked, the slowdown happens on first load; unchecked, it happens when closing the dashboard.
This may not be your problem but it is something with a recent update and worth a shot to look into.
No, it is still slow. So far I can’t get the BAQ to return the same results using a CTE. I would like to use a CTE as a kind of snapshot of OrderHed/Dtl/Rel. If I understand correctly, setting up a subquery with these tables as a CTE creates a reference that is faster to join to than the original tables.
I think I have to go back to the drawing board and redesign the BAQ from the ground up with CTEs in mind.
Think of it like this. The database is a phone book (dating myself here). Every time you look up something, you look it up in the phone book.
A CTE is like knowing you only need certain pages for what you are doing: you make a copy of those 10 pages and just use that copy instead of going back to the big phone book every time.
You’ll note that it really only helps if you can cut down the amount of information you need (so less than the whole book) and if you need to use it more than once. So it may or may not make things faster depending on what you are doing.
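To make the phone-book analogy concrete, here’s a minimal sketch of a CTE using SQLite from Python. (This is illustrative only: Epicor runs on SQL Server, and the `OrderHed` contents here are made up.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE OrderHed (OrderNum INTEGER PRIMARY KEY, OpenOrder INTEGER)")
cur.executemany("INSERT INTO OrderHed VALUES (?, ?)",
                [(1, 1), (2, 0), (3, 1), (4, 1)])

# The CTE "copies the pages we need" once -- only open orders --
# and the rest of the query works from that smaller set.
rows = cur.execute("""
    WITH OpenOrders AS (
        SELECT OrderNum FROM OrderHed WHERE OpenOrder = 1
    )
    SELECT COUNT(*) FROM OpenOrders
""").fetchone()
print(rows[0])  # 3
```

Whether the engine actually materializes the CTE or inlines it is up to the optimizer, which is why the speedup is not guaranteed.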
I’m reminded of Jeff Moden’s RBAR concepts when I see UD02a.ShortChar01 = right(XRevisionNum,2) in a couple of joins. Is query execution iterating through an entire table and performing right() on each row for each UD02 row? What happens if you select that table as a subquery that only returns its join fields plus a calculated right(XRevisionNum,2) field, then use that as a join intermediate? Or say something like XRevisionNum like '%' + UD02a.ShortChar01 and len(UD02a.ShortChar01) = 2 instead?
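A toy sketch of that first suggestion, moving the function out of the join predicate into a subquery, using SQLite from Python (`substr(x, -2)` standing in for T-SQL’s `right(x, 2)`; the tables and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE PartRev (XRevisionNum TEXT)")
cur.execute("CREATE TABLE UD02 (ShortChar01 TEXT)")
cur.executemany("INSERT INTO PartRev VALUES (?)", [("REV-A01",), ("REV-B02",)])
cur.executemany("INSERT INTO UD02 VALUES (?)", [("01",), ("02",), ("99",)])

# Original shape: the function sits inside the join predicate,
# so it can be evaluated per row comparison.
slow = cur.execute("""
    SELECT COUNT(*) FROM PartRev p
    JOIN UD02 u ON u.ShortChar01 = substr(p.XRevisionNum, -2)
""").fetchone()[0]

# Suggested shape: compute the suffix once in a subquery,
# then join on the plain calculated column.
fast = cur.execute("""
    SELECT COUNT(*) FROM
        (SELECT substr(XRevisionNum, -2) AS RevSuffix FROM PartRev) p
    JOIN UD02 u ON u.ShortChar01 = p.RevSuffix
""").fetchone()[0]

print(slow, fast)  # 2 2 -- same result, different shape for the optimizer
```

The results match; the point is that the second shape gives the optimizer (and, on SQL Server, any index on the join column) a plain equality to work with rather than a function call.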
For example, when joining Part (or OrderDtl, or whatever) to PartTran, BAQs work okay until the number of records starts piling up. As mentioned earlier, the BAQ performs a query on every record where the large table is connected downstream.
When I see a table like PartTran I go ahead and put it in an inner subquery and filter it the best I can to make the list smaller. For example, I’ll filter to “inventory” transactions if I don’t need cost transactions, etc. Or stock-to-material, or whatever it is I’m looking for out of PartTran.
This inner subquery has to run only once instead of a zillion times, and it’s the subset of data I’m looking for.
Then I connect Part to the subquery.
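A small sketch of that pattern, filtering the big table down inside a subquery before the join, using SQLite from Python (the `TranType` values and quantities here are invented, not real Epicor data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Part (PartNum TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE PartTran (PartNum TEXT, TranType TEXT, TranQty REAL)")
cur.executemany("INSERT INTO Part VALUES (?)", [("A",), ("B",)])
cur.executemany("INSERT INTO PartTran VALUES (?, ?, ?)", [
    ("A", "STK-MTL", 5), ("A", "ADJ-CST", 1),
    ("B", "STK-MTL", 3), ("B", "STK-MTL", 2),
])

# Filter and aggregate PartTran *inside* the subquery first,
# so Part joins to a small pre-shrunk set instead of the whole table.
rows = cur.execute("""
    SELECT p.PartNum, t.TotalQty
    FROM Part p
    JOIN (SELECT PartNum, SUM(TranQty) AS TotalQty
          FROM PartTran
          WHERE TranType = 'STK-MTL'
          GROUP BY PartNum) t
      ON t.PartNum = p.PartNum
    ORDER BY p.PartNum
""").fetchall()
print(rows)  # [('A', 5.0), ('B', 5.0)]
```

The cost-adjustment row for part A never reaches the join at all, which is the whole idea.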
There are probably better ways to do it.
One issue with PartTran, and maybe your UD02, is that many of the criteria you place on the table/query involve columns that are not keys to the table, which is not very efficient, so you want to perform as few queries on it as you can.
The same with anything else where you can cut the number of times something has to fire.
I love you guys! Now we are really getting into some good stuff! I think I am slowly getting it. These are all great ideas. I will post back once I have done some more testing. Thanks!!!
Jeff Moden is known for coining the term “RBAR” which stands for “Row By Agonizing Row”. It is a consequence of coding in a strictly procedural way, rather than in a set-based way. It is different from poor coding; it is the result of adopting a mindset that one always has to tell the computer, step by step, how to do something rather than, in a set-based approach, merely specifying the result one is aiming for.
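A toy contrast between the two mindsets, using SQLite from Python (the `Orders` table is invented). Both produce the same total, but the first issues one query per row, the RBAR pattern, while the second declares the result in one set-based statement:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Orders (OrderNum INTEGER, Amount REAL)")
cur.executemany("INSERT INTO Orders VALUES (?, ?)",
                [(1, 10.0), (2, 20.0), (3, 30.0)])

# RBAR: one query per row, telling the engine *how*, step by step.
total = 0.0
for (order_num,) in cur.execute("SELECT OrderNum FROM Orders").fetchall():
    (amt,) = cur.execute(
        "SELECT Amount FROM Orders WHERE OrderNum = ?", (order_num,)
    ).fetchone()
    total += amt

# Set-based: one statement declaring *what* we want; the engine plans it.
(set_total,) = cur.execute("SELECT SUM(Amount) FROM Orders").fetchone()

print(total, set_total)  # 60.0 60.0
```

At three rows the difference is invisible; at three million rows the per-row round trips are what makes it “agonizing.”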
I am in the process of rewriting the BAQ to use more of the tips you all suggested. I am trying to filter down each subquery as much as possible before joining it to anything. I am also trying to make CTEs out of tables that I link a lot of data to. I got rid of that right() expression and just used a calculated field.
When should I use this? If I do use it, should I only put it in once, run it, then take it back out? Or do I leave it in there forever? So far, I haven’t noticed any difference from this setting.
Thanks!
That setting is to combat parameter sniffing. That’s a whole ’nother long post to explain, but basically it means SQL builds a plan based on how the query is first run (for example, looking for a single row) and then keeps that plan even if the next time you are looking for 50k rows. It’s one of those “feature, not a bug” things that can really cause some problems. Using that setting (option recompile) tells the system to make a new plan every time the query runs. Making a new plan each time costs something, but it’s usually way more acceptable than using a stale plan when there are performance problems. So if you find you are having performance issues, especially intermittent ones (like it runs fast one way, but not another), using that might help.
And that’s a super over simplified explanation that is further complicated with how queries actually get run by epicor. So lots of things seem like they are parameter sniffing issues, when they really aren’t, and it’s really hard to track down.
All that to say… trial and error, my friend. It won’t really hurt anything, other than a possible small performance hit if you use it where it’s not needed. If you use it everywhere, though, it’s going to add up and cause some slowdowns.
I wouldn’t worry much about using non-recursive CTEs over inline table expressions (“inner subquery” in BAQ-ish). There isn’t a consistent, generally measurable performance difference between the two. If one is easier to read or reason about, that’s a good enough reason to go with it. Except for recursion; then you’re stuck with a CTE.
From the as-executed BAQ examples I’ve seen, query plan reuse and parameter sniffing aren’t a significant issue - they’ve all been executed via sp_executesql with the parameter values spelled out inline rather than initialized first. Actually declaring the parameters may expose them to parameter sniffing and potentially plan reuse related issues, but we don’t have that problem. Instead, we have SQL injection…
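A toy illustration of that last point, the difference between spelling values out inline in the SQL text versus binding them as parameters, using SQLite from Python (the `Users` table is invented; this is not how BAQs execute, just the general injection mechanism):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Users (Name TEXT)")
cur.execute("INSERT INTO Users VALUES ('alice')")

user_input = "x' OR '1'='1"

# Value concatenated into the SQL text: the input becomes SQL,
# and the always-true OR clause matches every row.
inlined = cur.execute(
    "SELECT COUNT(*) FROM Users WHERE Name = '" + user_input + "'"
).fetchone()[0]

# Bound parameter: the input stays a value, never SQL.
bound = cur.execute(
    "SELECT COUNT(*) FROM Users WHERE Name = ?", (user_input,)
).fetchone()[0]

print(inlined, bound)  # 1 0
```

Declared parameters trade this injection surface for the plan-reuse and sniffing behavior discussed above, which is exactly the trade-off being described.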
Do you know, @kananga, why Option(recompile) makes a difference then? (I agree with you that it shouldn’t be parameter sniffing, but it seems to help in a lot of cases.)
It’s kind of a tossup if it helps or hurts. Sometimes variables in a function or stored procedure are so impactfully inconsistent that it’s less work to build a plan from scratch than go with the existing one based on statistics.
sp_executesql doesn’t give the optimizer as much to work with. It kind of jumps out from behind a corner, says boo, and the optimizer YOLOs a plan. Not necessarily a good or bad plan, just less effectively considered.
Normally recompile is an execution option I avoid, but so is sp_executesql. In BAQs my usual optimizing methods aren’t available, so ¯\_(ツ)_/¯ hey, why not! If it works, it works.