Error Running Function - 'Connection was closed'

Hi,

I am running a function from the ‘Schedule Epicor Function’ screen. The function was created by the Epicor Custom Solutions Group (CSG).

It is failing in System Monitor with a strange error, which I’ve copied below. This seems to say the process lost connection to the server, but Support cannot explain why. For reference, we are a public cloud customer.

Anyone seen this before?


Executing library ‘CsgPegBuild’ function ‘buildPeggingFile’
“kinetic”: An error occurred trying to run task ID 950616 for agent “SystemTaskAgent” on the application server (User: “ZanebK”, Task Description: “eFX: CsgPegBuild.buildPeggingFile - The main funct”).
Error details:
System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
at System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at System.Net.Security._SslStream.EndRead(IAsyncResult asyncResult)
at System.Net.TlsStream.EndRead(IAsyncResult asyncResult)
at System.Net.Connection.ReadCallback(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
--- End of inner exception stack trace ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Extensions.Http.Logging.LoggingHttpMessageHandler.<g__Core|5_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Extensions.Http.Logging.LoggingScopeHttpMessageHandler.<g__Core|5_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Epicor.ServiceModel.Channels.ImplBase.d__127.MoveNext() in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Framework\Epicor.ServiceModel\Channels\ImplBase.cs:line 1093
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Epicor.Utilities.AsyncHelper.RunSync[TResult](Func`1 method) in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Framework\Epicor.ServiceModel\Utilities\AsyncHelper.cs:line 16
at Epicor.ServiceModel.Channels.ImplBase.<>c__DisplayClass123_0.<CallWithCommunicationFailureRetry>b__0(Context _) in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Framework\Epicor.ServiceModel\Channels\ImplBase.cs:line 1014
at Polly.Policy`1.<>c__DisplayClass13_0.b__0(Context ctx, CancellationToken ) in /_/src/Polly/Policy.TResult.ExecuteOverloads.cs:line 42
at Polly.Retry.RetryEngine.Implementation[TResult](Func`3 action, Context context, CancellationToken cancellationToken, ExceptionPredicates shouldRetryExceptionPredicates, ResultPredicates`1 shouldRetryResultPredicates, Action`4 onRetry, Int32 permittedRetryCount, IEnumerable`1 sleepDurationsEnumerable, Func`4 sleepDurationProvider) in /_/src/Polly/Retry/RetryEngine.cs:line 63
at Polly.Retry.RetryPolicy`1.Implementation(Func`3 action, Context context, CancellationToken cancellationToken) in /_/src/Polly/Retry/RetryPolicy.cs:line 74
at Polly.Policy`1.Execute(Func`3 action, Context context, CancellationToken cancellationToken) in /_/src/Polly/Policy.TResult.ExecuteOverloads.cs:line 82
at Epicor.ServiceModel.Channels.ImplBase.CallWithCommunicationFailureRetry(String methodName, ProxyValuesIn valuesIn, ProxyValuesOut valuesOut, RestRpcValueSerializer serializer) in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Framework\Epicor.ServiceModel\Channels\ImplBase.cs:line 1013
at Epicor.ServiceModel.Channels.ImplBase.CallWithMultistepBpmHandling(String methodName, ProxyValuesIn valuesIn, ProxyValuesOut valuesOut, Boolean useSparseCopy) in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Framework\Epicor.ServiceModel\Channels\ImplBase.cs:line 962
at Epicor.ServiceModel.Channels.ImplBase.Call(String methodName, ProxyValuesIn valuesIn, ProxyValuesOut valuesOut, Boolean useSparseCopy) in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Framework\Epicor.ServiceModel\Channels\ImplBase.cs:line 941
at Ice.Proxy.Lib.RunTaskImpl.RunTask(Int64 ipTaskNum) in C:\_releases\ICE\ICE4.3.100.0\Source\Shared\Contracts\Lib\RunTask\RunTaskProxy.cs:line 63
at Ice.TaskAgent.Support.ServiceCall.RunTaskImplCaller`1.<>c__DisplayClass4_0.b__0(TImpl impl) in C:\_releases\ICE\ICE4.3.100.8\Source\TaskAgent\TaskAgentSupport\ServiceCall\RunTaskImplCaller.cs:line 47
at Ice.TaskAgent.Support.ServiceCall.RunTaskImplCaller`1.Call[TResult](Func`2 doWork, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior, Boolean isContinuousProcessingTask) in C:\_releases\ICE\ICE4.3.100.8\Source\TaskAgent\TaskAgentSupport\ServiceCall\RunTaskImplCaller.cs:line 150
at Ice.TaskAgent.Support.ServiceCall.RunTaskImplCaller`1.Call(Action`1 doWork, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior, Boolean isContinuousProcessingTask) in C:\_releases\ICE\ICE4.3.100.8\Source\TaskAgent\TaskAgentSupport\ServiceCall\RunTaskImplCaller.cs:line 52
at Ice.TaskAgent.Support.ServiceCall.ServiceCaller.<>c__DisplayClass49_0.<RunTask_RunTask>b__0() in C:\_releases\ICE\ICE4.3.100.8\Source\TaskAgent\TaskAgentSupport\ServiceCall\ServiceCaller.cs:line 354
at Ice.TaskAgent.Support.ServiceCall.ServiceCaller.<>c__DisplayClass75_0.b__0() in C:\_releases\ICE\ICE4.3.100.8\Source\TaskAgent\TaskAgentSupport\ServiceCall\ServiceCaller.cs:line 705
at Ice.TaskAgent.Support.ServiceCall.ServiceCaller.CallWithInvalidSessionHandling[TValue](Func`1 makeCall) in C:\_releases\ICE\ICE4.3.100.8\Source\TaskAgent\TaskAgentSupport\ServiceCall\ServiceCaller.cs:line 715
at Ice.TaskAgent.Core.ScheduleProcessor.CallServiceAction(SysTaskRow sysTaskRecord, SysTaskParamRow companyParamRecord, ServiceCallArguments serviceCallArguments, Boolean isContinuousStartupTask)

Perhaps this is related to the use of System.IO… See this post here.
System.IO - #26 by Epic_Santiago

This could also be related to configuration changes in the cloud environment that support may not have been informed of.

It may also be that the code is not robust enough and was not written to use a resilience library like Polly (which, funnily enough, is already there on the server).

Please don’t ask me about it… All I know is that it helps your apps keep working when things outside your control break, which is great for cloud environments. Here is a link: https://www.pollydocs.org/
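
To make that concrete, here is a rough sketch (not the CSG code — an illustration with made-up URL, method names and retry counts, assuming Polly v7) of what wrapping an outbound HTTP call in a retry policy looks like in C#:

```csharp
// Minimal sketch of retrying a dropped HTTP connection instead of letting it
// fail the whole scheduled task. Names and values here are illustrative only.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.Retry;

public static class ResilientCall
{
    private static readonly HttpClient Client = new HttpClient();

    // Retry up to 3 times on transient network failures, backing off 2s, 4s, 8s.
    private static readonly AsyncRetryPolicy RetryPolicy = Policy
        .Handle<HttpRequestException>()   // covers "connection was forcibly closed"
        .Or<TaskCanceledException>()      // HttpClient timeouts surface as this
        .WaitAndRetryAsync(
            retryCount: 3,
            sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<string> GetWithRetryAsync(string url) =>
        RetryPolicy.ExecuteAsync(async () =>
        {
            using var response = await Client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        });
}
```

The idea is simply that a single dropped connection gets retried after a short back-off instead of killing the whole task.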

1 Like

This looks to be an error upstream of whatever the function is calling.
Could be many things.

Hmmmm… this seems to be a “them” problem, not a “you” problem. I’d ask support to check the health of your application server(s).

1 Like

Yes, this happens to our processes (not just functions) periodically, and cloud ops/support can never explain why. Sometimes I will come in and everything that had been running at a specific time just shows as cancelled, with similar errors in the logs. They never have an answer as to what they did. I created an Automation Studio recipe to email me when this happens, so at least I know what I need to go rerun.

4 Likes

I think there is an argument for adding resilience to your code… Interestingly, I used to see similar issues on-prem when running a scheduled function on an aggressive schedule; incidentally, that function also used System.IO.
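
For the System.IO side, even a simple retry around the file write goes a long way. A rough sketch only — the file path, retry counts and logging are invented, and this assumes Polly v7 is available:

```csharp
// Hypothetical sketch: retry a transient System.IO failure (e.g. the file is
// briefly locked) instead of letting one IOException cancel the scheduled run.
using System;
using System.IO;
using Polly;

public static class PeggingFileWriter
{
    public static void WriteAllLinesWithRetry(string path, string[] lines)
    {
        var policy = Policy
            .Handle<IOException>()
            .WaitAndRetry(
                retryCount: 5,
                sleepDurationProvider: attempt => TimeSpan.FromSeconds(attempt),
                onRetry: (ex, wait, attempt, _) =>
                    Console.WriteLine($"I/O retry {attempt} after {wait.TotalSeconds}s: {ex.Message}"));

        policy.Execute(() => File.WriteAllLines(path, lines));
    }
}
```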

1 Like

Thanks for the pointers. I’ve asked the CSG team to provide more info on the use of System.IO in this case.

This is an error I lived with for a few weeks in 2020. It is a generic “network died” type of error. In my case it was a VPN connection using net.tcp, and after moving (as @Hally points out) to a more resilient HTTPS connection, it went away. Assuming this is already an HTTPS call, then it needs web-style timeout guards and retries.
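
By “timeout guards and retries” I mean something along these lines — a sketch only, with an invented endpoint and invented timeout/retry values, using the Polly v7 API:

```csharp
// Sketch: a per-attempt timeout wrapped inside a retry, so a hung or dropped
// HTTPS call is cancelled and tried again rather than hanging the task.
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Polly;
using Polly.Timeout;

public static class GuardedHttpCall
{
    private static readonly HttpClient Client = new HttpClient();

    public static Task<HttpResponseMessage> SendAsync(string url)
    {
        // Cancel any single attempt that takes longer than 30 seconds.
        var timeout = Policy.TimeoutAsync<HttpResponseMessage>(
            TimeSpan.FromSeconds(30), TimeoutStrategy.Optimistic);

        // Retry twice on network faults or when the timeout policy fires.
        var retry = Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .Or<TimeoutRejectedException>()
            .WaitAndRetryAsync(2, attempt => TimeSpan.FromSeconds(5 * attempt));

        // Retry is the outer policy, timeout the inner one.
        var guarded = retry.WrapAsync(timeout);

        return guarded.ExecuteAsync(
            ct => Client.GetAsync(url, ct), CancellationToken.None);
    }
}
```

Wrapping the timeout inside the retry means a stalled attempt is given up on quickly and retried, rather than one hung socket blocking the scheduled task indefinitely.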

1 Like

My bet is on a timeout or uncaught exception in the code.

3 Likes