We have been using Docstar (ECM) without IDC for the past 2 years. We are just now adding IDC and noticed the system requirements say that IDC needs to be its own separate server (with SQL DB) apart from the Docstar server we already have.
With only 2,000 transactions a month, is it really necessary that we have IDC on a separate server?
Just curious what other configurations there are out there. Is it really necessary to have 2 servers dedicated for Docstar?
@jdewitt6029 I don't know if it's needed since we are just getting started with AP Automation, but our consultant was advised that we also needed a separate IDC server for 2,000 transactions a month, so I have ECM and IDC servers in both dev and production.
@jdewitt6029 My personal opinion would be no, you don't need a separate server, but that also has a lot to do with how much work your DS instance will be doing. Do you have a lot of users in DS, or a lot of workflows happening at the same time as the AP invoice import? Things like that…
If you're virtualized then it's a no-brainer: add a second server. But if you're on physical servers, you could try it first, then move it to another server if the performance is poor. Moving it wouldn't be hard.
We were under the impression that IDC needed its own instance of SQL Server, and we were going to run it on the same server as the IDC application. But we are now being informed that IDC can reside on the same instance as our Epicor environment's DBs, Enterprise Search DB, etc., and that IDC only does logging and stores system settings. Can anyone confirm whether that is correct?
I can confirm that. The IDC database is really only accessed to save the learned data; otherwise it's loaded into the services for OCR and processing. We have a VM for the IDC services, and the database resides on our main ERP SQL server. Same goes for Docstar: it has its own VM for services and image storage, but the database coexists on the ERP SQL Server. It really has very little traffic, generally speaking.
92k docs right now, DS database is 3 GB used. IDC database is 325MB used.
We've got 40 GB of docs to import later this month, so we'll see what happens.
Note - we have lots of fields on our content types, and my workflows pull back all that data from ERP so that all documents are equally searchable inside Docstar, so my index data is a LOT more than someone using it to store the docs for (and only really accessible from) ERP.
So we have a handful of doctypes inside ERP and really force the users to do everything via the ERP UI. This 40 GB of stuff is from DocLink, which we used prior to Docstar.
Then I've got another 20 GB of stuff that is currently linked in ERP but using the "file store" storage method, so the files sit in a big directory of PDFs on a shared drive. Once I can get a query built that will find and rename them based on the XfileRef/XFileAttach tables inside ERP, I can bring them into Docstar using a workflow, reattach them back to ERP, and drop the old references.
Ah, I see… I would imagine that someone from Docstar has something created that does just that, 'cause we are also using the file store storage and I would like to move it all so that we have one central document management system that we can train and adapt to.
They have a utility for doing it, but only "they" can use it. I know it works for DocLink (it's what we are using), but I'm not sure about any other system, and if I recall correctly, it won't work for the file store documents either. That's why I'm working on a script to pull the data out of ERP and a workflow to consume it all.
To import into DocStar, you could create an XML file for each document with the metadata in it and drop the two files (original and XML) in the take-up folder. You should have the metadata (of some kind) in DocLink, but if not, you could IDC your DocLink PDFs…
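Just to illustrate the pairing, a sidecar file for one PDF might look something like the fragment below. The element and field names here are placeholders I made up, not the documented take-up schema, so check your own take-up/import configuration for the actual format:

```xml
<!-- Hypothetical sidecar dropped next to invoice-00123.pdf in the
     take-up folder; element and attribute names are placeholders. -->
<document>
  <file>invoice-00123.pdf</file>
  <contentType>AP Invoice</contentType>
  <field name="Company">MC</field>
  <field name="InvoiceNum">00123</field>
  <field name="Vendor">ACME Supply</field>
</document>
```

The point is just that each original document travels with one metadata file carrying whatever index fields your content type expects.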
The query I'm building pulls the metadata from the transaction associated with each attachment. The query will be run per ERP doctype (quote, SO, PO, AP, AR, etc.) and will create the XML/CSV file for import. I've found a few variations on the theme, but I thought it might be simpler to use the script to pull the key field data (company-type-transaction#) and rename the file, then use the workflow to do all the work with a metadata datalink back to ERP, followed by an attach-to-ERP step. Or maybe even use the File ID# or GUID… I've almost got it all where I want it.
I don't know PowerShell well enough, but I think it might work if it could do a SQL query (say, by calling the command-line SQL utility): it could then recursively parse through my shared folder, looking up the files, renaming them, and dropping them into the take-up folder. But I'd also want it to call the API to "delete attachment by ID#" as well… that way it would be self-cleaning, one file at a time (in case it crashes).
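As a rough illustration of that flow (in Python rather than PowerShell, since the idea is the same): walk the share, rename each file using a key looked up from ERP, and move it into the take-up folder, cleaning up as you go. The SQL lookup and the delete-attachment API call are stubbed out here; the table names in the comment and the one canned filename mapping are made-up placeholders, not a real Epicor schema or API:

```python
# Sketch only: not Epicor's migration utility. Stages "file store" PDFs
# into a DocStar take-up folder, renamed to a key like company-type-tran#.
import shutil
from pathlib import Path

def lookup_keys(share: Path) -> dict:
    """Stand-in for the SQL lookup against the XfileRef/XFileAttach tables.
    A real version might shell out to sqlcmd or use a DB driver with a query
    along the lines of:  SELECT <file name>, <key fields> FROM Erp.XFileRef ...
    Here we just return a canned mapping: original name -> renamed key name."""
    return {"scan001.pdf": "MC-PO-4500123.pdf"}

def stage_for_takeup(share: Path, takeup: Path) -> list:
    keys = lookup_keys(share)
    staged = []
    # Snapshot the recursive listing first, since we move files as we go.
    for pdf in list(share.rglob("*.pdf")):
        new_name = keys.get(pdf.name)
        if new_name is None:
            continue  # no ERP reference found for this file; leave it alone
        dest = takeup / new_name
        shutil.move(str(pdf), dest)  # rename + drop into the take-up folder
        # The self-cleaning step would go here: call the ERP API to delete
        # the old attachment record by its file ID, one file at a time,
        # so a crash mid-run leaves nothing half-migrated.
        staged.append(dest)
    return staged
```

Doing the delete immediately after each move (rather than in a batch at the end) is what makes the run restartable: anything still on the share is, by definition, not yet migrated.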