HELP. PLEASE. 56 records, 24 hours, and still only 98% complete. And no result has appeared behind the progress bar. WTF is going on? This document usually composes at a rate of 0.7 records per second. This is my second try with the same result, and I closed InDesign between attempts. I paid my money and expect an official response. I posted this by following a link labelled "contact us." Does Adobe not monitor the World Wide Web 24x7? How do foreigners get tech support?
I still have a live, unresolved issue whose defining characteristic I have yet to discover.
I have spent hundreds of hours inventing workarounds and optimizations to get an entirely non-exotic data merge, involving only two documents, to work.
I say non-exotic, because I can bring to mind hundreds of documents I have seen in the world that are far more complex, and nothing in the marketing or documentation suggests InDesign is limited to small-scale projects. Even if it were, there is plenty of room to argue my situation is not large or massive in scale, but no greater than medium.
I have ruled out the possibility of being RAM-constrained, as Task Manager indicates significant memory headroom at all times. Nor am I processor-constrained in any avoidable way, as most of my CPUs are not being assigned tasks, though a couple are heavily taxed. Basically, only about two processors out of 32 are doing much, and total processing load hovers at 5%. I have plenty of workspace on a RAID 5 array and have even tried running the entire project off a 10 GB RAM disk that held all source files and was designated as the scratch disk. This provided little benefit, which suggests the problem is not disk-intensive. The process is clearly CPU-intensive, but minimally multi-threaded.
This is what I have discovered so far:
(1) Mail merge was not engineered to the prevailing standard of quality at Adobe.
(2) InDesign cannot handle moderately large data sets. By this I mean even a data set of 5,000 records takes a long time to process. 11,000 records takes 2:11. And 20,000 records, before some image size reductions that may have affected the results, once took over 48 hours before I aborted the session. This seems very counterintuitive. It takes 0.7 seconds per record when handling quantities that seem manageable to InDesign; unmanageably sized data sets seem to result in non-linear growth of processing time.
(3) Preview Multiple Record Layout will corrupt a document, making it useless forever with no hope of repair if one mistakenly saves it. This has been true for many years. While Adobe might argue they have de-prioritized a fix due to limited resources, I cannot imagine any justification for why they have not simply removed that checkbox from the merge dialog box.
(4) Data merge placeholders are often longer than the data they contain, such that while the preview may be intelligible, the placeholders themselves may overflow the bounding box. In other words, they may be completely invisible.
(5) If a placeholder exists for data that is not contained in the data source, InDesign throws an error but does not call out the offending placeholder by name! Combined with the above, if a record has 50 fields, it can take an hour to locate the source of the problem when it should take only a second.
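Until Adobe names the offending field, you can diff the placeholder names against the CSV header yourself. A minimal sketch (it assumes you can list the placeholder names by hand from the Data Merge panel; the function name is my own):

```python
import csv

def missing_fields(placeholder_names, csv_path):
    """Return placeholders that have no matching column in the CSV.

    InDesign's error does not name the culprit, so collect the
    placeholder names yourself (from the Data Merge panel) and pass
    them in; the CSV header is read directly from the data source.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f))
    # The @ (image) and # (QR) prefixes are part of the column name,
    # so compare names exactly as they appear in the header row.
    columns = set(header)
    return [name for name in placeholder_names if name not in columns]
```

For example, `missing_fields(["Name", "@Photo", "City"], "data.csv")` returns just the fields you actually have to fix, instead of leaving you to eyeball all 50.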
(6) Data placeholders do not reflect what data type they hold. So I might create a source document, then place a placeholder called Employee that contains a name and one called @Employee that contains a picture, but then be startled to find both display as <Employee> in placeholder view. Using longer names to distinguish the two is not always convenient because of point (4).
(7) If a cross-reference on a master page is linked to data on the page containing placeholders, it will in some cases be locked at the value of record 1 instead of varying per record. As I recall, the problem manifests when merging to PDF, not when merging to another document. But this is very unsatisfactory, because of the next problem.
(8) InDesign will merge to a PDF in 10% of the time it takes to merge to a new document and then save that document as a PDF. That makes no sense.
(9) While I cannot find evidence of a memory constraint, as I never run out of memory, I believe I have found a database limitation, presuming that InDesign has some internal database to which the merged data is added. When InDesign encounters this constraint, instead of throwing an error and aborting, it proceeds to a conclusion and then fails to display any merged result. This can waste hours of time. How does InDesign not understand its own hard-coded limitation?
(11) When merging too large a data set without encountering the above constraint, the merge fails in subtle ways, creating records that look well formed but in which any given record contains field values from differing records. This is very difficult to detect without manually cross-checking the output against the input, which voids much of the benefit of having automated the merge. Sometimes one field gets stuck and its value is imposed on all subsequent records; this is easier to detect.
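One defensive measure: add a unique record-ID column to the source and place it somewhere inconspicuous in the layout, so a shifted or stuck field is detectable at a glance. A sketch of the preprocessing step (the field name `RecordID` is my own choice, not anything InDesign requires):

```python
import csv

def add_record_ids(src_csv, dst_csv, id_field="RecordID"):
    """Copy a data source, prepending a unique ID column to every row.

    Place the ID field somewhere inconspicuous in the layout; after a
    large merge, spot-checking that each page's ID matches the data it
    shows exposes the cross-record contamination described above.
    """
    with open(src_csv, newline="", encoding="utf-8") as fin, \
         open(dst_csv, "w", newline="", encoding="utf-8") as fout:
        reader, writer = csv.reader(fin), csv.writer(fout)
        writer.writerow([id_field] + next(reader))   # header row first
        for i, row in enumerate(reader, start=1):
            writer.writerow([str(i)] + row)
```

It does not make the bug go away, but it turns an hour of manual cross-checking into a quick scan down one column.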
(12) Sometimes InDesign gets stuck in Preview mode, and even a reboot does not restore non-Preview mode. This is not the phenomenon described in the warning presented when saving in Preview mode, which says closing and reopening the document will restore non-Preview mode. This makes debugging data merge placeholders nearly impossible.
(13) If a placeholder adjacent to another placeholder has different character formatting than the subsequent placeholder, then in situations where the first placeholder has a null value, the second placeholder inherits the character formatting of the first. The workaround is to insert, between the two placeholders, a printing character with a paper-colored fill (I believe a non-printing character does not work).
(15) Data sets with larger numbers of fields seem to result in some fields getting stuck, with that value propagated to later records, though only when merging to a PDF. This is not precisely the same as problem (11).
(16) After having figured out all of the above, I ran the same document with what I consider to be a wholly equivalent data set, but one containing only 56 records. Yet it is only 98% merged after 19 hours.
Is it possible InDesign is sensitive to how many files are in the source directory? While any given merge may reference as few as 50 different images from a library of 15,000, I need to keep all 15,000 in the source directory to avoid the complications of managing subsets with ever-changing boundaries.
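To test that hypothesis without hand-managing subsets, a small script can stage only the images a given CSV actually references into a clean directory. This is my own sketch, not anything InDesign provides; it assumes the @-column holds plain file names resolvable against the library folder:

```python
import csv
import shutil
from pathlib import Path

def stage_images(csv_path, image_field, library_dir, staging_dir):
    """Copy only the images a given merge references into a clean dir.

    `image_field` is the @-prefixed column holding image file names.
    Point the merge at `staging_dir` instead of the full 15,000-file
    library to test whether directory size is what slows InDesign down.
    """
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = Path(row[image_field]).name
            source = Path(library_dir) / name
            if source.exists():                 # silently skip missing files
                shutil.copy2(source, staging / name)
    return staging
```

Re-run it whenever the data set changes and the "subset with ever-changing boundaries" problem disappears, because the subset is recomputed from the CSV each time.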
What I would like to do is grant an engineer remote access to my machine to study the exact particulars of this nightmare.
PS: For the sake of completeness, I will also note the following issues, though not all are actual bugs:
(1) When seeking to merge records to a single page, you lay out only one record and data merge replicates it. This confuses more people than any other issue. Previewing this single record is OK, but ...
(2) Previewing Multiple Record Layout will corrupt your document. The only solution is to copy the contents to a new document; the corruption will not follow.
(3) You cannot put placeholders on a master page. But you can move the data there via cross-references. Make the data on the non-master page transparent to hide it.
(4) You cannot create a true multi-page record, but with some effort you can fake it by laying out both pages on one sheet, even if you intend to print them in duplex. Search my other postings for details.
(5) Preface image field names with @ and fields you want QR-coded with #. In Excel, preface @ and # with a single quote ('); the quote will be stripped out automatically when saved to CSV. A shame they don't support UPC coding, since commerce is wholly dependent on it.
(6) Wrap commas that are part of the data in double quotes to prevent misinterpretation as delimiting commas in CSV files. If you are sourcing from Excel, I believe Excel will manage this for you.
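If you generate the CSV programmatically rather than from Excel, the standard library does the quoting for you, and the @/# prefixes survive with no single-quote trick. A minimal sketch:

```python
import csv
import io

# Build a tiny data source entirely in code; csv.writer quotes any
# field containing a comma, so "Smith, Jane" stays one field.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Name", "@Photo", "#Serial"])   # @ = image, # = QR
writer.writerow(["Smith, Jane", "jane.jpg", "SN-001"])
print(buf.getvalue())
```

Write `buf.getvalue()` to a .csv file and InDesign reads the comma inside "Smith, Jane" as data, not as a delimiter.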
Hopefully my misery has produced something of value to the community ...