Issues with REST BAQ

Hey all,

I think we’ve already relayed a lot of this info to you via a support case (which I hope has been passed along by now :slight_smile: ), but just in case, here’s a direct discussion of it.

So here’s the deal with Excel and Power Query clients plus our API keys. Long story short, we needed to add API keys which don’t themselves give you any rights to do anything but still require a user login, so that we can restrict and monitor what can be done by application (say, mobile CRM) as well as by user. There is another way to build API keys where the key itself is all you need to log in and make calls, and we have that as a maybe on the future roadmap.

The Microsoft folks behind the query component used in Excel (and other things) made an assumption that API keys are always authorization keys (no login), and it throws an error if you couple a login with an API key because it treats two different login methods at the same time as invalid. In our services that’s not correct. We found that out relatively late in our v2 release testing process.

We can’t drop the login + key behavior, which we’re going to need for applications that log in with different user accounts and act with those users’ rights, nor can we drop required keys, so instead we opted (for now) to direct folks back to API v1. While application integrations are our primary use case for REST, we do care about other applications that know how to connect and consume, so we do have a few options that may enable this in the future, but we have not yet implemented one. In the meantime the answer is still: use v1, though we understand many of the newer OData-consuming apps assume OData v4, which leaves you in a bit of a catch-22.
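For what it’s worth, a generic HTTP client has no trouble sending both credentials together - the problem is specific to that connector’s assumption. Here’s a rough sketch of a scripted call; the host, company, BAQ name, credentials, and exact route shape are placeholders, not documentation:

```python
# Rough sketch (all values are placeholders): calling a v2 BAQ endpoint
# with basic auth (the user login) plus the application API key header.
import requests

url = "https://your-server/api/v2/odata/YOURCOMPANY/BaqSvc/YourBaq/Data"

resp = requests.get(
    url,
    auth=("username", "password"),            # user login via basic auth
    headers={"X-API-KEY": "YourApiKeyHere"},  # application API key
    timeout=30,
)
resp.raise_for_status()

# OData responses carry the rows in the "value" array.
for row in resp.json().get("value", []):
    print(row)
```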

The options, just FYI, are:

  • Add security-imbued API keys. This is what that Excel connector is expecting, and it was on our backlog, but we have some security concerns around it so have not yet pulled the trigger. Basically these are ERP service accounts bound to an API key plus required restrictions on which APIs they can call. Easy for API-to-API integration, but not appropriate for apps, which is what our keys are tuned for now.

  • Add an alternate way to pass the key that won’t trigger this bad assumption, specifically for these connectors, such as accepting the key in the URL in some other way.

  • Stop trying to use our existing REST as an ETL source and create our own first-class connectors. I will say we’re very aware that our REST services are really not built with bulk ETL or ongoing streaming ETL in mind, so even if you make it work it’s still not a great solution in that it’s quite inefficient, but today it’s the only option in SaaS - that won’t be true forever.

Can I ask - what’s your actual data warehouse platform that you’re trying to get data into?

6 Likes

BTW all, I thought I’d go ahead and give that CData OData ODBC driver a try with our REST services v2 - it works great, and I’d think it should be a reasonable solution. You just have to know how to fill out the settings to align with our APIs:

Here’s an example for the COM-CustJobs BAQ. Fill out:

  • The root URL for the BAQ or business object (don’t include /Data, etc.; BaqSvc/BaqName/ is the root here)
  • Use basic authentication with user/pass.
  • Under custom headers, add your API key as X-API-KEY:YourApiKeyHere - this also works if you place it in the custom URL parameters as a query string like api-key=YourApiKeyHere.

Then you can query the one table “Data” using SQL, so here in Excel I’m just letting it do SELECT *.
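The same settings translate outside Excel, too. As a rough sketch, if you save them as an ODBC DSN (the DSN name below is made up), any ODBC-capable client can run that same SELECT:

```python
# Rough sketch: querying the single "Data" table the CData OData ODBC driver
# exposes for a BAQ root, via pyodbc. The DSN name is a placeholder for
# whatever you configured with the settings above.
import pyodbc

conn = pyodbc.connect("DSN=Epicor_COM_CustJobs")  # hypothetical DSN name
cursor = conn.cursor()
cursor.execute("SELECT * FROM Data")              # the one table exposed per BAQ

for row in cursor.fetchall():
    print(row)

conn.close()
```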



So - that’s, I think, a relatively reasonable way to hook up things that want SQL sources with the APIs, notwithstanding the OData source type in Power Query fighting our query string additions.

11 Likes

PS. I had not noticed this comment until just now, but as the product owner for the data platform including REST, EDA, EDD, etc. - definitely not. We’d never do this - your data belongs to you, and that’s a core principle of our data platform. To paraphrase Tim O’Reilly: we create more value than we capture by having an open data platform, and that’s better for everyone.

Especially in the data warehousing space, our expectation is that one set of customers wants a solution that’s easy to get rolling with and not overwhelming for mere mortals (EDA), and another set wants to go best-of-breed or already has a warehouse which they want to expand and retain rather than starting over.

5 Likes

Oh heck yes on this sentiment. I missed that as well (I was off-site last week, so I’m catching up on all of life at the moment). One of the major aspects of the REST effort is making Epicor more open and allowing you access to your data - of course, so you don’t want to leave. Why would you bother when you have all the breadth and depth of functionality and can get at your data in any way you want and need?

3 Likes

Thanks Brian, a lot of awesome info here!! :raised_hands:

1 Like

@bconner Thanks for all the great information!

I have only heard amazing things about EDA and I believe that it is absolutely the solution for 80-90% of the Epicor customers. There will be a very small group who will build their own solution though.

I think an easy feature would be to add modern endpoints to BAQ Exports. You could start with Amazon S3 buckets and Azure Data Lake Storage (Gen 2). It would be awesome to add OneDrive eventually too.

Last, if you are serious about building a data platform, don’t give up! I would love to see Epicor in the Open Data Initiative or maybe even one day use their GitHub repo! Just remember, things change very very slowly, and then all at once!

2 Likes

FWIW we’re very aware of the CDM and the rest of the Azure analytics/data stack, data lake included. In fact I just came back from Build with a whole set of new services to play around with. You can see us starting to use select things like IoT Hub in products now.

Anyway, better ways to ETL to arbitrary other cloud tools, especially for analytics- and reporting-type use cases, are coming eventually, but that’s about all I can say at this moment.

Getting Epicor to place stuff regularly onto an official GitHub though… ha, well, I tried, but we’re a big boat to turn. :slight_smile:

5 Likes

@bconner Awesome! I assume that you already did a lot of this CDM work for Azure Search with your Epicor Virtual Assistant product?

Someone tell the XL Connect Team to stop lobbying against this… sarcasm

Thank you for all the info, and keep up the good work.

Thanks Brian. I’m going to try out the cdata driver.

The thing that is so painful is that if you use the legacy Excel driver (which is hidden but still available), you can connect to the BAQ Service, see a list of every BAQ, then pick the ones you want and pull them all into Excel. It works exactly like SQL and is perfect.
Unfortunately all that stopped working when the 2016 Excel Power Query driver was introduced. Since then (and even with CData) you can only point to one BAQ per connection.

I’m using the Jet Analytics Jet Data Manager (an OEM version of TimeXtender), which comes prebundled with most of the CData driver library.

My data warehouse pulls from about 50 tables. This means I need to ‘hard code’ 50 sets of URL, username & password, and API key info into the ETL tool. If we want to connect this to another client (we are a Gold Epicor Partner), all of this has to be changed, by hand, 50 times without making a mistake.

When working with on-prem clients, or when we host and don’t have the cloud locked down like Epicor SaaS, this is not a problem, as we put the ETL on the app server, which has direct connectivity to SQL Server.

Even Epicor seems to have realized this problem and installs Phocus’s (the vendor behind EDA) SQL ETL extraction program on the app server to support EDA, so that it runs direct SQL queries against the database and securely sends the results to Phocus’s cloud for presentation and manipulation.

But Epicor won’t let partners do the same by putting our own ETL there, so we are, for the most part, unable to help Epicor SaaS customers who want a Microsoft-based data warehouse/BI solution (or any other besides EDA, for that matter).

Many other vendors today have fully embraced the Azure-based BI world, including the CDM and native drivers. Obviously that includes Business Central and Finance & Operations, but also many others. If Epicor would join the party, we could help ease the perception that we aren’t cutting edge anymore.

It also appears the problem with trying to use the service to return the data is that you can only do so for one company at a time. If I’m trying to build a solution that can support any client, who might have 1 or might have 10 companies in their database, this won’t work unless I build a custom warehouse for every client.

Indeed. Unfortunately there’s a technical reason we had to change the BAQ API in v2, which breaks each BAQ into its own OData service rather than presenting them all under one large service root the way the v1 services do. TBH this doesn’t really help you, aside from assuring you we’re not just making changes without thinking them through, but here’s the why:

In v1 we make one big OData service for BAQs, /BaqSvc/, with one entity under it for each BAQ, which is why with v1 you can get a full list of BAQs, browse it, and choose one. Something we don’t love about OData is that to really comply with the standard there’s an expectation of a single $metadata document under each service root containing the schema for every entity, so correct OData would mean /BaqSvc/$metadata returns the schema for every BAQ in one call. There is an indeterminate number of BAQs, so doing that isn’t feasible - the document would be far too large and slow to respond. So we made a decision, which I came to regret, to violate the standard a little: we placed the schema at /BaqSvc/BaqName/$metadata to break it out, and the services refer to it correctly, but /BaqSvc/$metadata doesn’t return anything.

The point being, if you connect with an OData client that assumes standard $metadata route compliance, it fails with our v1 BaqSvc. If you use a client, like the old Excel OData connector, which does not try to read $metadata, it works - but by luck alone.

So in v2 we decided that, even though we’d prefer it be one big service, we needed to be 100% standards compliant, not 98%, since we were blocking some OData client libraries and apps from working with our BaqSvc properly. Thus the decision to break each BAQ out into an independent service, which actually does what the standard mandates. The downside is that clients need to know exactly which service to hit - thus the one-BAQ-at-a-time thing, the /Data entity on each, etc.

If I had my druthers there would actually be a single OData root for the entire set of services so you could browse the whole set, but OData doesn’t accommodate that well with the one-giant-metadata-document approach.
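To make the shape of that change concrete, here’s a rough sketch of the two layouts (host, company, and BAQ names are placeholders):

```python
# Rough sketch of the v1 vs v2 BAQ route shapes described above.
# Host, company, and BAQ names are placeholders.
import requests

auth = ("username", "password")
headers = {"X-API-KEY": "YourApiKeyHere"}

# v1: one big service root with an entity per BAQ. The standard location
# /api/v1/BaqSvc/$metadata would have to describe every BAQ, so it returns
# nothing; the per-BAQ /api/v1/BaqSvc/YourBaq/$metadata was our workaround.
v1_root = "https://your-server/api/v1/BaqSvc"

# v2: each BAQ is its own standards-compliant service root, with its own
# $metadata document and a single Data entity set.
v2_root = "https://your-server/api/v2/odata/YOURCOMPANY/BaqSvc/YourBaq"
schema = requests.get(f"{v2_root}/$metadata", auth=auth, headers=headers, timeout=30)
rows = requests.get(f"{v2_root}/Data", auth=auth, headers=headers, timeout=30)
print(schema.status_code, len(rows.json().get("value", [])))
```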

Adding /Company/ into all the routes was done for a similar reason - not including company in our routes made many of our API URLs non-deterministic, including the BAQ URLs. What I mean by that is that a v1 service URL does not always contain the full unique identifier needed to get to the data - company context is buried in the headers or, in the case of the BAQ, an optional query string. That caused a lot of surprises and confusion for consumers, because if you don’t provide the company, the services automatically give you the last company context used. Meaning: surprise changes in company from call to call.

Moreover, the fact that we didn’t enforce company implies that the services do something they do not - return cross-company results if you have rights in more than one company. That’s ideally what they would do, but our underlying API is one company at a time for the vast majority of services. So adding company into the route is intended to end the surprise context shifts and make it completely clear what the API is going to do.

With respect to BAQs, since two companies can have two different BAQs with the same name, this led to very surprising behavior in the BaqSvc: you might get an entirely different BAQ run in that case, and $metadata can’t be predetermined, since without company in the v1 URL the entity ID isn’t actually fully unique.
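Purely as an illustration of that determinism point (again, placeholder hosts and company codes; the v1 header/query string names aren’t spelled out here):

```python
# Sketch of why putting the company in the route makes calls deterministic.
# All hosts, company codes, and BAQ names are placeholders.

# v1: the URL alone isn't a full identifier. Company context comes from a
# header or (for BAQs) an optional query string; leave it off and you get
# whatever company context was last used.
v1_url = "https://your-server/api/v1/BaqSvc/YourBaq"

# v2: the company is part of the route, so the same URL always resolves to
# the same BAQ in the same company - no surprise context shifts.
v2_url_a = "https://your-server/api/v2/odata/COMPANYA/BaqSvc/YourBaq/Data"
v2_url_b = "https://your-server/api/v2/odata/COMPANYB/BaqSvc/YourBaq/Data"
```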

For use cases where you have to manually plug it into a tool that can’t swap out parameters in the URL, I get it - this is more work to deal with. I’ll think on whether there’s some easier way, but I hope it makes sense why we decided we had to make these changes: to bring the API to standards and really make it deterministic so there are no surprises - you get what you ask for every time now, whereas before there were ways to get surprised. I think, were I in your position, I would be looking to see if it’s possible to write a small script or tool to do my deployment, so I could take configuration in and then replace it into all my configs, but I don’t know if that’s possible with the tool you’re using. We have to do that in Epicor Data Analytics - we built our own installation tooling to inject configuration of companies, fiscal calendars, database connection information, etc. into our syncs during deployment.
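For what it’s worth, the kind of thing I mean is roughly sketched below - the BAQ list, URL shape, and output format are invented for the example and would have to match whatever your tool can actually import:

```python
# Sketch only: generate per-BAQ connection settings for a new client from a
# single config block, instead of hand-editing ~50 entries. Everything here
# (file names, URL shape, field names) is invented for illustration.
import json

client = {
    "host": "https://client-server",
    "company": "COMPANYA",
    "user": "svc_etl",
    "password": "********",
    "api_key": "YourApiKeyHere",
}

baqs = ["COM-CustJobs", "GL-Trans", "AR-Invoices"]  # ...extend to all ~50

connections = [
    {
        "name": baq,
        "url": f"{client['host']}/api/v2/odata/{client['company']}/BaqSvc/{baq}/Data",
        "user": client["user"],
        "password": client["password"],
        "headers": {"X-API-KEY": client["api_key"]},
    }
    for baq in baqs
]

# Write something your ETL tool can import (format is hypothetical).
with open("connections.json", "w") as f:
    json.dump(connections, f, indent=2)
```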

4 Likes

I can’t speak to that one - haven’t been significantly involved in the EVA skills development.

Thanks Brian. That was very helpful.

However, with EDA it seems you just bypass this whole issue, run direct SQL statements against the database, and then send the data to Phocus’s data warehouse servers.

Then, of course, Epicor allows the Epicor SaaS group to install the EDA extraction program on the SaaS servers.

But since you won’t allow any other tool to be installed on Epicor’s SaaS servers, there is no way for us to emulate your approach, and that leaves Epicor SaaS customers with fewer options to get at and manipulate their data.

I think this highlights the options for hosting. If you want that level of control, you can be self-hosted or on-premise. So the option is there and is perfectly viable.

Yep, that’s right. We allow our own applications, which we can control, to run SQL in the SaaS environments because we can take responsibility for it being safe to run in that environment. @Banderson is correct that, for now, if you need that level of control there are other hosting options where you can do that.

That said, we know there is a strong need for cloud-to-cloud ETL capabilities at larger volumes than are easy to move over our REST services. We hear you on that point and want to improve the capabilities there, but I can’t yet go into any more detail than that!

1 Like

Thanks for all of the insight and feedback @bconner

Unfortunately, many of our clients (we are authorized partners) have already taken the SaaS route, thanks to a hard sell to move in that direction (the investors like it too). It’s not easy to convince an existing SaaS client to move to a perpetual-license product plus separate hosting fees after they’ve made the investment, and if they are multi-tenant, I’m not even sure Epicor can extract a working company database. I know there is no such tool for on-prem clients who want to take a company out of a database and install it as a separate instance (something it should be able to do).

Hi @mchinsky

Could you please shed some light on how to do this disabling task?

Seems this should be a separate application for sale.