Epicor Rest Helper (Nuget) Updated V2

I don’t know just yet. It’s for a newly added Salesforce integration I wrote to create parts. That doesn’t happen very often – still monitoring.

Jeff, don’t you have unit tests that run against a Development DB? You’re not developing in production, are you?

I’m not sure I understand your question. The Epicor REST client runs as an Azure App Service that takes REST API calls from our Salesforce instance for the purpose of creating parts in Epicor.

The high memory issue is happening where the REST client lives, in the Azure App Service.


I think Adam is asking: if Azure App Service lets you run multiple deployment slots, Salesforce lets you have test and production instances, and Epicor lets you run test and production instances, then can you not create a new part in the Salesforce test instance and see whether it creates the part in the Epicor test instance?


What Mark is suggesting is an integration test, which is a great final step before deploying to production. Before going that far, you would want to test smaller units of the code locally. I would want to prove out the critical pieces. If you don’t have a test instance of Salesforce, for example, you could take an example payload from the Salesforce API and save it in a unit test. Then you can test sending that message to your Epicor test instance. Once it works, you could upload it to your Azure App Service. It’s easier to debug memory issues locally than it is in Azure, I’m sure.
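
To make that concrete, here’s roughly what I mean as a sketch (the URL, route, and JSON below are made-up placeholders, not the real Salesforce or Epicor APIs): capture one example payload, keep it in a test, and replay it against your Epicor test instance from your local machine, where attaching a profiler is easy.

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Xunit;

    public class CreatePartTests
    {
        // Example request body captured from the Salesforce side (fields invented for illustration).
        private const string SamplePartJson =
            "{ \"PartNum\": \"TEST-001\", \"Description\": \"Unit test part\" }";

        [Fact]
        public async Task Posting_sample_part_to_epicor_test_instance_succeeds()
        {
            using var client = new HttpClient();

            // Placeholder URL for an Epicor *test* instance endpoint.
            var response = await client.PostAsync(
                "https://epicor-test.example.com/api/parts",
                new StringContent(SamplePartJson, Encoding.UTF8, "application/json"));

            Assert.True(response.IsSuccessStatusCode);
        }
    }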

Ok, then yes. We originally ran the full stack in the respective development environments, where we did our development testing and UAT. We went live and only discovered the memory exception under load once we were in production.

All that said, so far it looks like @josecgomez made a good suggestion to set the request log limit.


I located the source of the leak. Found this: Memory leak when RemoteCertificateValidationCallback gets attached · Issue #1354 · restsharp/RestSharp · GitHub

… and looks like we’re getting this exact same scenario.

When I look at the memory growth and diff the objects between snapshots in debug mode after subsequent calls to the client endpoint, the snapshot jumps in size after each call. There is a single offending object … the referenced RemoteCertificateValidationCallback, which seems to align closely with the case described in the GitHub issue.

I don’t have RestSharp included in my project. Does v2 have RestSharp embedded, @josecgomez ?

Jeff,
I updated the RestSharp library to 108.0.1 when I added the Async methods. According to the issue you linked, the memory leak shouldn’t occur in this version because it uses HttpClient for its calls. However, that issue was closed without resolution, so nobody confirmed the cause or the solution. It seems you are saying it’s not fixed. If you can reproduce the issue in the RestClient, maybe you can help get it resolved in the upstream library – that’s really our only hope when the problem is in a dependency. There are some updates in version 108.0.3 that we could try; not sure if that would help.

@A9G-Data-Droid … BTW, I don’t know if it matters, but I’m calling BoPost(…), not BoPostAsync(…), in my static class. And, as I mentioned earlier, the OutOfMemoryException trace shows all the raw calls are BoPostAsync, so I’m assuming you have those wrapped, which is why I’m seeing the async variants in the exception trace? Just asking given the context over in the RestSharp GitHub thread.

According to the profiler, those RestResponse objects are being kept alive because they’re referred to by the TaskCompletionSource (via the Task), which in turn is being kept alive because it’s referenced from the CancellationTokenSource, via its list of registered callbacks.

From all this, I gather that registering the cancellation callback for each request means the whole graph of objects associated with those requests will live on until the entire operation is completed.
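
To illustrate the retention mechanism outside of RestSharp, here’s a small standalone sketch (plain .NET, nothing Epicor-specific): callbacks registered on a long-lived CancellationTokenSource stay rooted, along with anything their closures capture, until the registration is disposed or the source is cancelled/disposed.

    using System;
    using System.Threading;

    class RegistrationRetentionDemo
    {
        static void Main()
        {
            // One long-lived token source shared across many "requests".
            using var cts = new CancellationTokenSource();

            for (int i = 0; i < 1_000; i++)
            {
                // Stand-in for a response/request object graph.
                var bigBuffer = new byte[1_000_000];

                // The registration is never disposed, so the source keeps a
                // reference to this closure (and the buffer it captures) alive.
                cts.Token.Register(() => Console.WriteLine(bigBuffer.Length));
            }

            // At this point roughly 1 GB of buffers is still reachable through the source.
            Console.WriteLine("All closures are still rooted by the CancellationTokenSource.");
        }
    }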

So can we re-register the callback per request inside where you wrap the calls to RestSharp – is something like this applicable to the problem at hand, by chance?

    // Start the request with the old callback-based RestSharp API and
    // complete the TaskCompletionSource when the response comes back.
    var asyncHandle = _restClient.ExecuteAsync(
        request, r => taskCompletionSource.SetResult(r));

    // Register cancellation for just this request, and dispose the
    // registration afterwards so the CancellationTokenSource does not
    // keep the callback (and the response graph it references) alive.
    using (cancellationToken.Register(() => asyncHandle.Abort()))
    {
        return await taskCompletionSource.Task;
    }

Jeff, I wanted the code paths to share as much code as possible. The way I did it led to deadlocks for some people. I have an updated version that fixes this by having the sync calls use the upstream sync and the async calls use the upstream async. That PR has not been approved by @josecgomez. I’m not sure if he’s had the time to review and test. That version might help with your issue. It would at least let you try the two different methods and see if they behave differently.
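
For anyone following along, here’s the shape of that change as a rough sketch – HttpClient stands in for the real RestSharp calls, none of these are the actual EpicorRestHelper method names, and it assumes .NET 5+ for the synchronous HttpClient.Send:

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class RestPathsSketch
    {
        private static readonly HttpClient Client = new HttpClient();

        // Old approach: the "sync" method blocks on the async one (sync-over-async),
        // which is what can deadlock in some synchronization contexts.
        public static string PostSyncOverAsync(string url, HttpContent body) =>
            PostAsync(url, body).GetAwaiter().GetResult();

        // New approach: the sync path uses a genuinely synchronous upstream API...
        public static string PostSync(string url, HttpContent body)
        {
            using var response = Client.Send(
                new HttpRequestMessage(HttpMethod.Post, url) { Content = body });
            using var reader = new StreamReader(response.Content.ReadAsStream());
            return reader.ReadToEnd();
        }

        // ...and the async path stays async end to end, never blocking on a Task.
        public static async Task<string> PostAsync(string url, HttpContent body)
        {
            using var response = await Client.PostAsync(url, body);
            return await response.Content.ReadAsStringAsync();
        }
    }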

I do not have time to test it this week. I can merge the PR and publish the NuGet package, but I can’t do the testing. Not sure if @jgiese.wci can.

Ok, that sounds like an interesting test, since we’re running non-async.

Out of curiosity @A9G-Data-Droid, based on the GitHub conversation, do you think having the non-async calls wrap the async ones might have created the problem with the callback that I mentioned above?

We’re excited to get this fixed because right now I have to restart the Azure App Service after every single call to the endpoint, or it starts failing with out-of-memory exceptions.

@Jeff_Owens if you go back to the original issue that you found on GitHub you can see they are saying this happens when you are using an untrusted certificate authority, like a self-signed certificate, on your HTTPS connections. The solution is either:

  1. Buy an SSL certificate for your Epicor server

  2. Issue a certificate from your domain CA, with the domain CA root trust established on all client systems.

The second one is tricky, but root trust of your internal domain CA can be pushed via GPO by your domain administrator.

I will look at the PR this weekend (maybe tonight)

@A9G-Data-Droid … We’re Epicor Cloud. I don’t think we would be able to make such accommodations. Right?

@Jeff_Owens cloud customers get a valid SSL certificate so that is not your problem.

Right. So then if that’s not it @A9G-Data-Droid, what about the cancellation token code I posted earlier? Does that seem like a viable possibility?

If you have a valid cert, then turning off the Ignore Cert flag should bypass that issue altogether.
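
Concretely, something like this in your client setup (assuming the flag in question is the EpicorRest.IgnoreCertErrors property you mentioned; worth confirming against your version of the helper):

    // With a valid, trusted certificate there is no need to ignore
    // certificate errors, so leave validation on.
    EpicorRest.IgnoreCertErrors = false;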

We’ve always had EpicorRest.IgnoreCertErrors = true set – it’s always been that way.

:point_up_2: That’s what I was referring to in my detailed post above. Relevant? Not relevant?