These days BAQs seem to be getting the best of me. I was working on a fairly complicated BAQ all morning, testing regularly, and it was performing very well. Better than I'd expect. It's looking up transactional data and calculating cost changes over time.
I set that aside for a few hours while I worked on some other things. Now I'm ready to resume working on it, and of course it times out every time. I know SQL Server does things to my query to "optimize" it; I think it broke itself. Is there a way I can figure out what the execution plan was this morning and then force it to use that? I'm digging around in SQL Server Management Studio, and I'm sort of competent… but I can't seem to find it. How would I go about finding it amongst the gazillions of other queries it processed all morning?
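If Query Store is enabled on the database (it often is on SQL Server 2016 and later, but check with your DBA), you can dig the morning's plan out of it rather than hunting through the plan cache. A sketch, assuming you filter on a table name that's distinctive in your BAQ's generated SQL:

```sql
-- Find recent plans for queries that touch PartTran; adjust the LIKE
-- filter to something distinctive from your BAQ's generated SQL
SELECT TOP (20)
       q.query_id,
       p.plan_id,
       rs.last_execution_time,
       rs.avg_duration / 1000.0 AS avg_duration_ms,
       qt.query_sql_text
FROM sys.query_store_query_text       AS qt
JOIN sys.query_store_query            AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan             AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats    AS rs ON rs.plan_id = p.plan_id
WHERE qt.query_sql_text LIKE N'%PartTran%'
ORDER BY rs.last_execution_time DESC;
```

The `query_plan` column on `sys.query_store_plan` holds the plan XML, which you can open graphically in SSMS. If you'd rather stay in the GUI, the database's Query Store node in SSMS has a "Top Resource Consuming Queries" report that shows each query's plans over time.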
Now that I've found a good execution plan, is there a way I can compare it against the plan from a recent execution? Apparently they are not the same, so even when I force the plan from this morning, it's not using it.
I made minor changes since this morning, though I can’t say exactly what. Mostly changing the names of the labels I believe.
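That's a likely culprit: Query Store plan forcing is tied to a specific `query_id`, and the `query_id` is keyed off the exact query text. A BAQ regenerates its SQL when you save changes, and even label renames can alter the generated text, so the morning's forced plan belongs to a query SQL Server now considers different. If the text really were identical, forcing would look something like this (the IDs are placeholders you'd pull from the Query Store catalog views):

```sql
-- Placeholder IDs: substitute the query_id / plan_id you found in Query Store
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Verify the force took, and see why it might be failing
SELECT plan_id, is_forced_plan, force_failure_count, last_force_failure_reason_desc
FROM sys.query_store_plan
WHERE query_id = 42;
```

If `is_forced_plan` is 1 but the query still runs with a different plan, the failure columns usually say why.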
Interestingly enough, I feel like I just trained the BAQ to run more efficiently. Not saying it will work in every case but:
I went back through each subquery and changed the Top level to each of the building blocks of the query. So, step by step as I build on previous subqueries to get my results, I kept moving the Top one level farther out and then testing the BAQ. Basically, troubleshooting to find the "weak link" in the system so I could address it. When I got back to the end and ran it all the way, it ran just fine again, like it did yesterday morning… so weird.
You might have already done this, and it might not apply, but here’s a thing I look at any time a table like PartTran is part of the query.
PartTran, of course, gets so huge after a while, and the fields you normally query on aren’t primary keys.
I put PartTran in its own subquery and filter it with criteria as much as I can and still retrieve the data I need. Maybe that’s limiting the output to certain transaction types and date ranges. For example, if I’m only looking at purchase transactions within the last five years, why deal with all the other data?
I've found that if I join PartTran directly into the top-level query, or into a subquery with other tables, it has to do that filtering a bunch of times. Putting it in its own subquery builds the dataset once and lets you query that smaller dataset instead of the whole table.
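To make that concrete, here's roughly what the pattern looks like in the generated SQL. This is a sketch: the column names and the `PUR-STK` transaction type follow common Epicor conventions, so adjust them to your own data.

```sql
-- Pre-filter PartTran once in its own derived table (the BAQ inner subquery),
-- then join the much smaller result set instead of the whole table
SELECT p.PartNum, p.PartDescription, pt.TranDate, pt.MtlUnitCost
FROM Erp.Part AS p
JOIN (
    SELECT Company, PartNum, TranDate, TranType, MtlUnitCost
    FROM Erp.PartTran
    WHERE TranType = 'PUR-STK'                       -- purchase receipts only
      AND TranDate >= DATEADD(YEAR, -5, GETDATE())   -- last five years
) AS pt
  ON pt.Company = p.Company
 AND pt.PartNum = p.PartNum;
```

The WHERE clause inside the derived table is what the table criteria on the BAQ subquery turn into, so the optimizer can cut PartTran down before any joins happen.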
Copy that. I follow a very similar approach for exactly the reasons you mention. I've also found that certain things perform much better as a CTE rather than a subquery. In this case, I had PartTran filtered to MFG-CUS and MFG-STK only, and only for the last 12 months.
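For reference, the CTE form of that filter might look like the following. This is a sketch under the same assumptions as above (Epicor-style table and column names); your joins and output columns will differ:

```sql
-- Same idea as the derived table, expressed as a CTE
-- (the BAQ designer's CTE subquery type generates this shape)
WITH FilteredTran AS (
    SELECT Company, PartNum, TranDate, TranType, TranQty
    FROM Erp.PartTran
    WHERE TranType IN ('MFG-CUS', 'MFG-STK')          -- mfg receipts only
      AND TranDate >= DATEADD(MONTH, -12, GETDATE())  -- last 12 months
)
SELECT ft.PartNum, SUM(ft.TranQty) AS TotalQty
FROM FilteredTran AS ft
GROUP BY ft.PartNum;
```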