This may have been answered before, but I am having trouble wading through all the similarly worded questions to find the answer.
We are currently on-prem and, for obvious reasons, we are researching what moving to the cloud might look like for us. One of my biggest fears is losing direct access to the SQL database. Yes, I know there are BAQs, and yes, I know there is a read-only synced copy for local access, but both have SIGNIFICANT drawbacks for some of the solutions we have developed over the years of having the database here.
For example, we have an in-house developed fulfillment application. It builds a queue of consumer orders, prioritizes them, and hands them out to pickers based on time, customer priority, availability, etc. To keep this queue accurate for things like held orders, cancelled orders, and priority changes, it polls the Epicor SQL database's OrderHed table about every 5 seconds for records to load. This poll is exceptionally efficient: it averages under 0.3 seconds and places no data locks of consequence on the table. Then, based on those cursory findings, the app loads the orders with a slightly deeper look at the table if needed.

Doing this across the board for our custom apps with BAQs would require creating and maintaining approximately 500 BAQs, and even then I cannot run a BAQ in 0.3 seconds; the overhead of the app handling the request is significantly greater than direct SQL access. The local copy is too stagnant and, from what I read on here, too prone to breaking. If it broke overnight, we could rack up massive fines before someone caught on and got it fixed. On Black Friday, for instance, we can easily handle 6,000 orders through this custom application; an hour of dropped orders could easily cost thousands in late-ship fines.
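For a sense of scale, the poll described above is in the spirit of the sketch below. The table name and the SysRevID rowversion column are standard in the Erp schema; the company ID and the @LastRevID bookkeeping are hypothetical illustrations, not our actual query.

```sql
-- Hypothetical sketch of the ~5-second poll: a narrow column list and an
-- incremental filter so only rows changed since the last poll come back.
-- @LastRevID is the highest SysRevID the app saw on its previous pass.
SELECT OrderNum, OrderHeld, OpenOrder, SysRevID
FROM Erp.OrderHed
WHERE Company = 'MYCO'          -- hypothetical company ID
  AND SysRevID > @LastRevID     -- only rows changed since the last poll
ORDER BY SysRevID;
```

Kept this narrow, a query like this can be satisfied almost entirely from an index and returns in milliseconds, which is the behavior the post describes.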
All Epicor transactions are handled through REST calls; we never directly edit or write data in SQL, but we often populate those REST calls from data read directly out of SQL. We do not open the SQL database for general ODBC access to users, only to applications controlled by IT.
Anyway, TL;DR: I see comments that Epicor discourages direct SQL access, but no definitive answer on whether it can be done anyway in a cloud environment (not multi-tenant, obviously).
In typical cloud SaaS you have no direct access to the SQL database. There are the read-only copy and BAQs, which both have drawbacks as you said. Integrations are best done using REST API calls.
That’s why many, even some of us current SaaS customers, are wanting a compromise to going cloud SaaS via Containers.
You can do this with the read-only database in the cloud. It's effectively real time (I have never been able to catch data in the read-only database out of sync with the UI).
I also question the 500-BAQ comment, unless you have 500 distinct SQL queries; they should map almost one to one. (In fact there is a converter now, but I haven't used it much.) But I think it's a moot point; I don't think his current design is cloud friendly.
Oh, agreed, the current design is not cloud friendly, as we had always been promised we could keep our database on-prem, until this year.
As to the 500 BAQs: because we know the impact of pulling too much data, literally every call to the database requests only the exact fields necessary for that call, with joins and filters chosen to steer SQL Server toward the best index, so the data comes back as fast as possible with the least resources. We rely heavily on execution-plan review and optimization for these queries. So one call to get customer data is just CustID and Name; a different call might also include GroupCode, so that's two different SQL queries. When poking at the database as much as we do, the queries need to be excessively quick or we can easily bring the database server to its knees. The overhead of the REST session alone adds nearly 300% of overhead (I have tested it) to our custom applications.
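To make the one-query-per-shape point concrete, the two customer lookups described above would be separate statements, each selecting only what its caller needs. The column names are standard Erp.Customer fields; the parameterized filter is illustrative:

```sql
-- Call 1: this caller only needs CustID and Name.
SELECT CustID, Name
FROM Erp.Customer
WHERE Company = 'MYCO' AND CustNum = @CustNum;

-- Call 2: same lookup, but this caller also needs GroupCode, so it is
-- a distinct query (and potentially benefits from a distinct covering index).
SELECT CustID, Name, GroupCode
FROM Erp.Customer
WHERE Company = 'MYCO' AND CustNum = @CustNum;
```

Multiplying a handful of base lookups by the different column shapes each app needs is how the query count climbs toward 500.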
So, yes, we could merge a bunch of those and use a more generic query in the BAQ to handle more edge cases; cutting down the total number of BAQs would just cost us more resources. Some of our custom applications could probably absorb the slowdown without a huge impact to our users, but for others it could be devastating.
Given your requirements, this would be my approach.
Strip the custom apps down to their business requirements. If past you was nice to future you, you've got somewhat up-to-date documentation to start from. If not, reverse engineering your past work is a fun chance to call out things like "who did that? It couldn't have been me…"
Due to the latency/uptime requirements, use a single source of truth. Build this out in Epicor as some combination of Dashboards, Functions, Schedules, Data Directives, Method Directives, and BAQs. If your queries are long-running, consider running them as cubes or as scheduled functions that dump data into a temporary location for later reference. You might also consider writing Advanced BPM uBAQ code, or function code against the Db object and tables directly (only for read-only things, of course); that way you can directly control how the data is chunked up.
Sell the labor cost to the higher-ups as increasing your reliability and accessibility: you reduce or remove the IT maintenance costs of the external apps and tie your uptime to Kinetic's.
I have not; it would require a very significant rewrite of how the program functions to get the benefit of that, though researching it will be on my list of things to do if I really can't get direct SQL access.
Well, as people mentioned, you can get a read-only copy for extra $$$, but you are still not going to be accessing it locally, so I don't know how the performance compares.
@aosemwengie1 Have you had any opportunity to compare cloud read only db performance to local db?
Do you know if the slowness is in establishing the connection, the round trip of the request, or the processing of the query itself? I could possibly handle the first two with how I establish and maintain the connection; for the third, not much can be done. OK, if you don't know, I just thought I would check.
Linked servers are always slow. I think you can establish direct access, which would be faster, but you have to have a stable IP for the cloud team to whitelist.
If you have read permissions on the system catalog, you could query server resources and their performance to identify bottlenecks you might be able to work around. If so, this would likely be in the ballpark of a good starting place:
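The original post promised a starting-place query here; as a stand-in, assuming the cloud team grants VIEW SERVER STATE (which they may not), something along these lines surfaces the heaviest cached statements. The DMVs are standard SQL Server; the TOP (20) cutoff and the sort by average elapsed time are arbitrary choices.

```sql
-- Top cached statements by average elapsed time (requires VIEW SERVER STATE).
SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_us,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;
```

Swapping the ORDER BY to avg_logical_reads instead highlights I/O-bound queries rather than slow ones.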
We're using the read-only database as well. As @aosemwengie1 said, we haven't seen any sync issues, and my understanding is that it is real time, not a scheduled sync.
It is designed for reporting and works great for that. As far as responsiveness, I haven't tested beyond our needs. Happy to test for you.
The connection is limited to ODBC.
We're using Access/ODBC for a lot of our reporting… stop laughing, it usually works. Response time pulling data directly from Kinetic tables is pretty good. Performance hits seem to come in on more complex queries, especially those involving local tables. The workaround has been to create local temp subsets of the Kinetic data and then reconnect.