There’s a lot of talk about landing pages, filters, sorting, and performance with Kinetic grids in the browser. There’s a part of me that wonders, are we asking Epicor to do things the same way we always did in classic, or are we flexible enough to consider alternatives to get what we need?
I agree with @Randy that grids don’t like large datasets. Should they? Is it wise to pull whole datasets down into a browser and expect it to work like Excel? Epicor is hardly the first software company to have to work with large datasets.
Obviously, the best place to work on the data is going to be closer to the server. The ideal situation is the browser showing us only the records/summarizations/grouping we want while doing the work server-side. Many grid component companies claim to do this, including Telerik Kendo.
I have no inside information on how Epicor has implemented this, but even so, as users, we may need to rethink our expectations about working in a remote data situation.
If you think of the data in the grid as your Working Dataset, it becomes obvious that you shouldn’t be working with 4,500 records, and that the filtering should be done on the server side – you’ll have fewer rows to deal with on the client, and everything will work more smoothly!
The hard part is articulating and building all those filters such that you get all the data you need (nothing missing or hidden by accident), but not so much that it’s unusable…
I’m not so knowledgeable about client-server relationships, etc. What does “filtering on the server side” actually mean? How do you do that?
Maybe as an example (and to confirm whether I have the right framing): you wouldn’t want a BAQ dataset in a grid to show every single sales order, only the ones relevant to the customer whose order you are in. You would add a filter (somewhere??). How do I know if that’s being done on the client side or the server side, and how would I force it to be done server side?
I could be wrong, but my understanding of server-side vs client-side filtering is this: with server-side filtering, the filter is applied before the data leaves the server, so only the matching rows ever cross the network; with client-side filtering, the full dataset is sent down and the browser filters it after the fact. The grid may look the same either way, but the amount of data transferred and held in memory is very different.
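Roughly, the difference can be sketched like this. The “server” here is just a function over an in-memory table, and all the names (`fetch_orders`, `OrderNum`, `CustNum`) are made up for illustration; in a real Kinetic/BAQ setup the criterion would travel with the request itself, e.g. as an OData-style `$filter` on the REST call, rather than being applied after everything has been downloaded:

```python
# Minimal sketch contrasting client-side vs server-side filtering.
# The "server" is a plain function over an in-memory table; the field and
# function names are hypothetical, not actual Epicor/Kinetic APIs.

ORDERS = [{"OrderNum": n, "CustNum": n % 50} for n in range(4500)]

def fetch_orders(cust_num=None):
    """Pretend server endpoint. If cust_num is given, filter BEFORE returning."""
    if cust_num is None:
        return list(ORDERS)                      # whole table crosses the wire
    return [o for o in ORDERS if o["CustNum"] == cust_num]

# Client-side filtering: pull all 4,500 rows, then filter in the browser/grid.
all_rows = fetch_orders()
client_filtered = [o for o in all_rows if o["CustNum"] == 7]

# Server-side filtering: send the criterion with the request; only matches return.
server_filtered = fetch_orders(cust_num=7)

assert client_filtered == server_filtered        # same answer either way...
print(len(all_rows), len(server_filtered))       # ...but 4500 vs 90 rows transferred
```

Both paths end up showing the user the same 90 rows; the difference is that the client-side version moved 4,500 records across the network and into browser memory to get there.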
This would all be less exasperating with Kinetic if the filters actually worked as intended and were noticeably less buggy. On top of that, we are doubly stymied by the unrelenting bugs in the back end as we try to “solution” new ideas and program around the existing ones (I keep going back to the suggestion currently in IDEAS for an update that simply fixes existing bugs – what a great idea that is).
IMHO.
Our in-house developer is brilliant and can program just about anything, but I’ve seen him lose his mind when layer (or other) updates don’t take, get corrupted, break, or simply don’t save and disappear.
Again, it’s getting better, I’ve noticed that. I have high hopes for this year’s updates.