DTP tool recommendations

My documents are starting to "stress" FrameMaker. I can either reduce the complexity of those documents *or* find an alternative that is better able to handle their complexity.

I looked at LyX and Scribus and found them terribly "lacking"; like 20-25 years too little, too late!

Things like MSOffice and OpenOffice are totally inappropriate as they are more oriented to "word processing".

I'm not keen on returning to the dark ages of TeX/LaTeX/nroff/troff/etc.

Any other suggestions (Mac/Windows?) with which folks have had some reasonable success with *complex* documents? (e.g., hundreds of figures/tables/cross references/footnotes/etc.). The Scribus/Quark approach is too much geared towards paste-ups. I'd prefer something where editing/composition *can* be done directly in the tool AND don't want to have to explicitly invoke a "preview" mode just to WYSIWYG.

I'll start looking at current offerings (including newer FM versions) as well as rolling back to OLDER offerings (which seemed to have FEWER problems handling big documents than more recent tools!)

Reply to
Don Y

You seem to be dismissing many good tools out of hand. Certainly when I think of figures/tables/cross references/footnotes/etc., then LaTeX is the first thing that comes to mind. And WYSIWYG is the /last/ thing I would think of. The worst thing you could have is a tool that encourages fiddling about with appearances as you work on the document, picking fonts from a ribbon bar or changing spacing by adding extra blank lines.

When writing complex documentation, you need to have a system where you have a strong separation between the content and the appearance or layout. Content should be written in plain text, plus other files for pictures, code snippets, or whatever. This is the fastest and most efficient way to write the text, has the best compatibility with your version control system, and lets you concentrate on the right part of the document at the time. It lets you organise your source as you see fit - all in one file, or files per chapter, or whatever. Some documentation may be generated automatically in some way, other parts may be shared across a number of output files.
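As a sketch of that separation (the file and package names here are only placeholders), a top-level LaTeX file might look like:

```latex
% main.tex -- all layout and style decisions live here, not in the content
\documentclass[11pt,twoside]{report}
\usepackage{graphicx}   % figures
\usepackage{booktabs}   % tables
% ... further style choices, macros, etc.

\begin{document}
\input{chapters/intro}      % each chapter is a plain-text file,
\input{chapters/hardware}   % easy to diff and version-control
\input{chapters/firmware}
\end{document}
```

The chapter files then contain nothing but content; changing the appearance never means touching them.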

It lets you write documentation where you get what you asked for - not where you get something that looks at first glance to be what you expected.

Turning this all into a nice pdf (or possible html) should be mainly automatic, once you have chosen the settings, formats, styles, etc. At most, you will want to make a few touch-ups - fixing a bad page break, re-arranging some text to make a line break look better, etc.

I know of no system better than LaTeX for this. Sometimes it is a bit fiddly, and some features can be hard to learn - but you get top-quality output with the least effort put in (when you are talking about big documents, the learning effort is a minor issue - it saves time overall). There are vast numbers of additional packages - for example, when making software documentation, there are packages for formatting code snippets. And of course it is programmable so that you can automate tasks. I've made documents where the main part of the document described various API telegrams - and the summary tables in the appendices were generated automatically from the information in the main content.

Now, there may be other ways to get this. Other systems such as DocBook have several similarities. There are other simple markup languages, and you can use macro preprocessors such as m4. But nothing else has the flexibility, tool support, package support, or community support to compare with LaTeX.

It's your documentation, and you will be writing it - you need to choose the tools, and you might have good reasons for disliking LaTeX. But to dismiss LaTeX as "dark ages", and to lump it together with troff, is ridiculous. It makes it sound like you simply don't understand what it is, but have merely heard that it is a text-based markup language that is difficult to learn. Certainly when /I/ am writing complex documentation (or even small documents), LaTeX is my first choice.

Reply to
David Brown

And that can be the tail that justifiably wags the dog.

Reply to
Tom Gardner

Yes indeed.

LibreOffice can do a reasonable job of showing the differences between two files (I presume MS Word is okay too, but I don't use that). But many other programs can't show what's changed - your version control system simply shows that /something/ has been changed, relying on the log messages to give information.

Another point is that I can take LaTeX files that I wrote in the 1990's, and still read them, generate new output files, etc. They need manual updating to use newer features, but the source of the text is all there. With closed format binary files, you are at the mercy of the software supplier for compatibility, new versions, support for different OS's, etc. And while in theory, many of these formats are XML and therefore "text", they are actually just incomprehensible binary blobs stored as sort-of ASCII XML. (docx is a perfect example.)

Reply to
David Brown

Turd documents have the advantage that comments/questions can be added to conveniently enable conversations.

In a corporate environment they used to (still do?) have an advantage that they often invisibly contained previous edits. If anything controversial came my way, I would load it into WordPad and sometimes I could see things the PHBs had been considering but didn't publicise.

It has always amazed me that people don't understand the difference between syntax (e.g. XML/XSD) and semantics, especially where obfuscation is possible.

Reply to
Tom Gardner

I'm not concerned with "writing"; if I was, I'd be using a "word processing" program. The issue is document *layout* -- hence "desktop PUBLISHING" tools.

So, I want to see what each font change, figure insertion, table reformatting, etc. operation does to the *visual* presentation. Is it better to insert a two column portrait page and flow this (400 line!) table over those two columns? Or, a three column LANDSCAPE page? Which makes it easier for the reader to read and locate the information of interest?

I want to POINT to a place in a source code fragment and insert a footnote and KNOW where the accompanying footnote text will appear. Or, point *in* a callout in a figure and insert a cross-reference anchor that I can later reference ("See Figure 23 on page 15" vs. "See Figure 23" -- based on how "far" the reference is from the anchor; can you "imagine" how many pages of text exist between two arbitrary points in a TeX document "source"?)

I want to insert a graph, then insert an *inset* that magnifies some portion of that graph (e.g., point of maxima) and add a callout that identifies the actual point of interest to the accompanying text.

(No, I don't want to have to open up the original graphing tool to concoct a *new* graph to insert, there; why? the existing graph already has the information I need... I just need to pan and scale to draw attention to it!)

I want to insert a sound clip and know that the "player" will appear on the same page as the text describing the characteristics of interest in that sound clip (so you don't have to flip pages to read about what you are listening to).

I want to see TWO GLYPHS instead of \theta\schwa (so I can recognize when I mistakenly type \Theta, instead!).

I want to tweak the kerning or spread of certain "words" to increase their legibility. E.g., I made a cheat sheet, last night, that used a "seven segment" font. But, its normal spacing had character sequences like 78 without enough space between them to differentiate the two characters; drag mouse to select, type "10%" in the spacing dialog: "Hmmm.... maybe 15% would be better?"

If I want to revisit some aspect of the document, I want to just drag the mouse over it and click on "RED" (if I adopt the convention of using red highlights in lieu of "FIXME")

I'm not keen on having to repeatedly render 60 pages of document just to see how some *portion* of it will look.

As I said, the structure of my documents is stressing FM's capabilities. The 64 page document I'm working on presently, already has 13 tables and 59 figures. And, numerous cross-references between them, footnotes, etc. Yet, the document (PDF) is barely more than 400KB.

Yes. I use a text editor to write my code. I use a compiler to verify that it works. I redirect output to a file. Then, import all of this and "make it look pretty". How is LaTeX going to make this easier than a WYSIWYG tool?

Likewise, I use drawing and graphing programs to make illustrations. Then, import them into my final document and annotate them as appropriate. How is LaTeX going to make this any easier?

I use CAD and EDA tools for technical illustrations. What value does LaTeX have in placing and annotating them in a document?

DTP is concerned with the *publication*, not the "writing".

No. That's only the case with documents that are primarily text. Words, phrases and sentences are pretty small and can be finely subdivided -- you can break a sentence at an arbitrary point and let it flow onto the next column or page.

You can't do that with a 3 column inch figure. And, you may not *want* to do it with a 6 column inch table! Because the goal is to make the information you are presenting "most comprehensible"; not just try to cram it on as few pages as possible.

When you put lots of figures and tables in a document, tweaking the layout becomes a much bigger concern -- else you end up with pages that have a few lines of text amid an otherwise BLANK page because the *figure* that followed the text had to move to the NEXT page and letting anything else fill that space leads to other layout problems (like a set of tables with the accompanying text being separated from it)

Such documents look embarrassing. And, you can't usually just "fiddle a thing or two" to make them behave. Often, you have to rethink how you present the material (e.g., maybe I can cut this table into two separate tables so they can be better "fitted" to the places they currently reside on the page(s)).

E.g., the enumeration of the "rules" in my TTS's would consume a boatload of space if typeset as "regular text". Knowing that they need not be examinable in detail at "100% scale" (you can zoom with an electronic display!) lets me typeset them at a much smaller point size -- so the table only takes *5* pages in the document instead of *10*!

Of course, I'd NOT want the rules for 'T' to span a page break. So, by seeing where those rules fall (or, maybe 'S'?), I can ensure they are presented in a manner that will facilitate their comprehension (no need to flip pages to see how rules T1-T3 on a recto page interact with rules T4-T8 on the following verso page: maybe I should arrange for the table to START on a verso page if that will allow it to finish on the following recto page -- and thus be visible in its entirety!)

Here's one of my gesture templates:

    0 0 moveto 60 60 60 -60 120 0 curveto

Do you REALLY think I created it by typing those characters and HOPING it turned out the way I expected? Or, type that and then render it, then tweak it and rerender it, etc.?

Rather, I used a graphic program to DRAW what I wanted (WYSIWYG) and then algorithmically extracted those parameters from its representation of the "drawing" (along with the other "drawings") for *import* to the documentation.

Likewise, I extract "content" from my documents and *export* that into my sources. E.g., my TTS rules are expressed in terms of IPA glyphs. The tables in the software are created by extracting the tables from the documentation (the beauty of MIF is that it is well documented and portable). So, I only have to worry about the documentation getting out-of-sync with the codebase if a developer fails to update the documentation and MANUALLY tweaks the tables generated from an earlier document.

Do you think I prefer writing: This is \emph{emphasized}! instead of typing it, highlighting "emphasized" and clicking on the "emphasis" character format to *magically* see it rerendered in whatever visual representation I've chosen for "emphasis"?

And, what happens when I type: This is an \emph{emphasized\par list}

(Ooops! Don't worry, you'll HOPEFULLY catch it when you render it!)

Reply to
Don Y

When you send someone a pdf created by LaTeX, they can add comments there. (The same applies to using LibreOffice and generating a pdf instead of giving people the .odt or .doc file.) This has the big advantage that you are never tempted to use the file they send back with "corrections", that generally screw up all your efforts in making a nice document because most people don't know how to use styles properly.

I am not sure I would classify that as an "advantage", but certainly that used to be an issue. Word had a "fast save" feature which did not actually remove text that had been deleted or changed since the last save, and it was often possible to recover old text. It's the same principle as recovering data from "deleted" files on Windows.

Agreed.

Reply to
David Brown

I found that to be a pain in the posterior with the DTP tools I used. Sure, I could easily change a paragraph, but for reasons that seemed to vary with phase of the moon, I rarely seemed able to propagate all such style changes correctly through an entire document. I know I'm incompetent, but that was easy in LaTeX.

I would have thought that was possible with multiple passes through LaTeX; certainly it is a pain if adding the extra words inserts a new page. And that was in any tool I used, but I'm sure you are pushing the tools harder than I needed.

That is a common requirement.

Agreed, but I can tolerate that, especially since it enables simple layout of complex formulae.

I can search for all instances of FIXME; less easy to search for a red highlight.

Swings and roundabouts. I prefer telling it what I /mean/, and having the pixels positioned behind my back. I always split large documents, so "60 pages" is not necessary.

No tool is perfect; swings and roundabouts; tradeoffs; personal preferences.

Reply to
Tom Gardner

Hi Don,

I don't want to have any arguments here - these are /your/ requirements, /your/ preferences, /your/ workflow and /your/ decisions. All I can do is say a little about how /I/ work with complex documents, and make comments if I think you may have made factual errors or misunderstandings, or simply are not aware of particular features. So I've put a few comments further down - you can agree with them or disagree with them as you want.

Your post here does give more detail about what you are looking for, and may help others give you different recommendations to consider.

With LaTeX, it is usually a quick matter to change a few package options at the start of the document, and everything will be re-flowed to suit. A key point is that such changes are consistent throughout the document (except, of course, if you don't /want/ them to be consistent - you have full control).

Another point is that this sort of thing is independent from the content. In a typical WYSIWYG program, you have a single "undo" stack - changes to layout and changes to text are merged. So if you have made some formatting changes, then edited the content, then want to undo the formatting changes, you have a lot of difficulty.

Such things can usually be expressed without problem in LaTeX. And using macros, you can add checks to be sure that you've got it right.

With LaTeX, you get that sort of thing automatically. Use the "varioref" package or something similar (there are various options - choose what suits best). Then references will say "on the previous page", "on the facing page" (for two-sided layouts), "on page 23", "above", or whatever makes sense at the time. You only have to write something like "\vref{fig:pictureOfMainBoard}" to get the reference you want, automatically adjusted for position.
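A minimal sketch of that (the label and file names are made up):

```latex
\usepackage{varioref}   % in the preamble

\begin{figure}
  \centering
  \includegraphics[width=0.8\linewidth]{mainboard}
  \caption{The main board}
  \label{fig:pictureOfMainBoard}
\end{figure}

% Later, anywhere in the text:
See \vref{fig:pictureOfMainBoard} for the connector locations.
% Renders as "See Figure 3 on the facing page", "See Figure 3 on
% page 23", etc., re-chosen automatically whenever the text reflows.
```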

I'd be tempted to automate that sort of thing with a bit of ImageMagick, scripting and makefiles. Then I would be able to make changes to the original graph and know that all derived information is updated appropriately.

LaTeX also has packages for doing some image manipulation - I don't know if they support something like this.

But I won't claim it is an easy or obvious thing!
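For what it's worth, one way to sketch the "magnified inset" within LaTeX itself, without redrawing anything, is to crop the same graphic twice with graphicx (the file name and trim values below are placeholders you would read off the region of interest):

```latex
% Full graph, then a magnified detail cropped from the *same* file --
% no need to regenerate the plot.  trim = left bottom right top, in bp.
\begin{figure}
  \centering
  \includegraphics[width=\linewidth]{response-graph}\\[1ex]
  \fbox{\includegraphics[width=4cm,trim=120 80 40 30,clip]{response-graph}}
  \caption{Frequency response; the boxed inset magnifies the maximum.}
\end{figure}
```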

There are packages that support that.

I don't see what you mean. \theta\schwa will give you two glyphs - θə. \Theta will give you one glyph, a capital theta - Θ. And with LaTeX, because it is all written as plain text, you can see /exactly/ what you are getting. You need much more effort to check the visual difference between the lower-case and upper-case thetas in a WYSIWYG system (at least, you do with the unicode font I am using here).

You can do that with LaTeX. And you can make macros to automate it all (if you can't find a package or font that does it already).

I use a "\todo" macro that highlights the section in the margin, puts a table of "todos" at the end of the document, and gives me a warning at the end of the pdf generation so I don't forget about them.
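A minimal sketch of such a macro (my real one is a bit longer, and the todonotes package offers a ready-made alternative):

```latex
\usepackage{xcolor,marginnote}

% Highlight the note in red, flag it in the margin, and emit a
% warning into the log so unresolved items show up at build time.
\newcommand{\todo}[1]{%
  \marginnote{\textcolor{red}{\textbf{TODO}}}%
  \textcolor{red}{#1}%
  \PackageWarning{todo}{Unresolved TODO: #1}%
}
```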

And that is why LaTeX has full support for rendering only parts of a document - while getting all the cross-references and things as correct as possible.
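The usual mechanism is \include plus \includeonly (chapter names below are placeholders):

```latex
\documentclass{report}
\includeonly{chapter-protocol}  % compile only this chapter for now

\begin{document}
\include{chapter-intro}
\include{chapter-protocol}
\include{chapter-appendix}
\end{document}
% Page numbers and cross-references into the skipped chapters are read
% from their last-compiled .aux files, so they stay as correct as possible.
```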

LaTeX generated pdf's are small and efficient. And 60 pages is not big - LaTeX will happily work with many hundreds of complex pages. (Clearly, bigger documents take longer to render - you will want to split it up while writing it.)

With LaTeX, you don't have separate steps for importing the data into your documentation system, then making it pretty. Your LaTeX document refers to the external files - it imports them as code listings, figures, tables, "verbatim" blocks, or whatever suits your needs. The prettifying is done automatically. And this way, when you are half-way through the documentation and then there is a software change resulting in a slightly different output, you don't have to go through the whole process again.
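For example (the paths here are placeholders), pulling source code and captured output straight from the build tree:

```latex
\usepackage{listings}  % pretty-printed code
\usepackage{verbatim}  % raw text dumps

% Re-running LaTeX after a software change picks up the new files
% automatically -- there is no separate "import" step to redo.
\lstinputlisting[language=C,caption={The parser}]{src/parser.c}
\verbatiminput{build/test-output.log}
```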

If the drawings are made externally, then the process is the same. The only difference is that the imports are part of the build process in LaTeX, not a separate step.

That depends on the tools you use. Some tools have LaTeX support, so that you can write your text in LaTeX in the drawings, and have it rendered in the same fonts when building the LaTeX pdf. You can even get cross-references and footnotes to work if you like. Of course, this only applies to tools with such support - for everything else, you include the illustration "as is".

LaTeX is described as "a document preparation system". It is not a writing tool, but a system for laying out documents. The emphasis is different from WYSIWYG DTP programs, but it is well suited to most publishing work.

Automating as much as possible with a good tool makes the whole job easier. But no tool will do the impossible.

That's why LaTeX gives you commands to help, such as letting you say where you want your tables and figures to go, or to give preferences (such as "put it /here/ if there is space, or at the top of the next page otherwise" or "if there is not room for at least two lines of the next paragraph, insert a page break to make the layout nicer").
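Concretely, the sort of commands meant here look like this (the needspace package is one of several options):

```latex
% Placement *preferences*, not absolute positions:
% h = here if possible, t = top of page, b = bottom, p = float page
\begin{table}[htbp]
  % ... table body ...
\end{table}

% "If fewer than four lines fit on this page, break the page here"
\usepackage{needspace}   % (in the preamble)
\Needspace{4\baselineskip}
```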

And with LaTeX, such things would be done by using a macro to set the size of the text for these "rules", so that you can easily change it in one place in the source file and have everything adapt appropriately.
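A sketch of that, with made-up sizes:

```latex
% One knob controls the point size of every "rules" table:
\newcommand{\rulesize}{\fontsize{6pt}{7pt}\selectfont}

\begin{table}
  {\rulesize
   \begin{tabular}{lll}
     % ... ruleset rows ...
   \end{tabular}}
  \caption{TTS ruleset (zoom the PDF to examine the detail).}
\end{table}
```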

Well, /I/ certainly prefer writing it that way. But I can't answer for what /you/ prefer.

And while it might be okay to use point-and-click for italics (or bold, if you prefer), it's a much bigger issue when you want to use different fonts for something.

For example, if I am writing a LaTeX document that has lots of keywords in it, and I want them to stand out. At the top of the document, I have a macro:

\newcommand{\keyword}[1]{\textbf{#1}}

Then in the text, I can write about \keyword{class} or \keyword{int}. If I find that keywords would look better underlined, or in italics, I can change it in one place in the document. I don't have to go through the document modifying everything. I don't have to worry that in one place I might have clicked on a keyword and changed its "style" (which can later be modified), and on a different keyword I accidentally used direct formatting changes. If I decide to change the background colour for the keywords, I don't have to worry if I have been careful to avoid including neighbouring spaces when pointing and clicking.

And I can write this far faster with a little bit of markup language, than with jumping between menus, ribbons, drop-down lists, and so on.
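E.g., that single-line change might be (purely illustrative):

```latex
% Switch every keyword from bold to underlined italics in one edit:
\renewcommand{\keyword}[1]{\underline{\textit{#1}}}
```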

Mistakes happen.

Certainly some mistakes in LaTeX are going to be missed until render time, and mistakes such as missing brackets can be really annoying. That is a disadvantage of "what you see is what you asked for", rather than "what you see is all you get".

Reply to
David Brown

You have to be disciplined to use paragraph and character "tags" consistently. E.g., every "foreign language" word/abbreviation that I use (etc., et al., i.e., e.g., ...) is tagged with the CHARACTER tag "foreign language". So, they all appear in italics. Every time I want to EMPHASIZE something, I tag it with the character tag "emphasis". Titles of publications (that I may reference in the body text) are tagged "Titles". etc.

So, if I want all Titles to be underlined and displayed in "small caps", I just change the definition of the "Titles" character tag and they all *instantly* appear as such.

For example, one way I chase down errors in tagging things that *appear* the same (e.g., assume "emphasis" and "foreign language" both render as "italic") is to temporarily change the COLOR of one of them and then quickly parse the document: "Gee, this 'etc' is in italics, but, it's not in BLUE italics like all the other 'foreign language' instances of 'etc'. Let me move the cursor into the word (click) and see what the status bar says... Ah! It's tagged as 'emphasis', incorrectly!"

This also makes it easy for me to extract data from a document. E.g., tag all of the variable names with "variable" and they are readily apparent in the MIF file. This, for example, is how I extract the contents of "table 12" and know which columns contain IPA "sound" glyphs vs. the ASCII text that generates them.

[I could have designed the tool that does this to look for "a table having the title 'Phonetic Ruleset'" but opted, instead, for "table N"]

What's it "cost" for each pass? More than a second? An extra keystroke/mouse click? I put together some little documents in LyX and tried rendering them to WYSIWYG. The result was unbearably slow ("I have to do this EVERY time I want to see the effect of my changes?")

Instead, I insert the "mouse cursor" in the "thing" that I want to reference then refer to it WHERE I want to reference it. I'll "see" the consequences as they happen; no need to run a "post-processing step" to make them visible.

[That causes you to make several changes before undertaking that "rendering" step. Do you remember *every* item that you changed? And, *where* you changed them? I see the changes as I make them: "Crap! Injecting those few words -- see figure 23 on page 12 -- caused that table to end up on the NEXT page (for the sake of that additional one 12 pt line of text). Maybe I can steal a few words out of the paragraph that it occurred in to bring things back the way they were..."

If you are dealing with just "lots of text", then these small changes (adding a phrase in place of a cross reference expansion) don't happen too often. They get absorbed in the little bits of vertical whitespace present on most pages.

OTOH, if you have lots of larger objects (tables, figures), then a small change can result in an ENTIRE large object moving. Suddenly, half a page has shifted because of one line of text (or even "a few words of text"). That, in turn, has an increased chance of causing something else to shift, etc.

If you try to keep the text associated with each table/figure proximate to the table/figure (e.g., "In the table, below..." instead of "In Table X on page Y"), then the text won't cleverly fill in the large voids created by the movement of those "large objects". So, you get pages with big empty spaces caused simply because the tool can't satisfy your layout requests with the space remaining on that page!

In these cases, I have to artificially add -- or elide -- stuff and watch how things re-flow to create a more appealing publication. "Gee, if I introduce a FORCED page break, here, then this page will be just a little bit 'emptier' -- but, the following page won't be a large expanse of whitespace!"

[Of course, when I later revise the document, the first thing I have to do is search and replace "PAGE BREAK" with "" in the regions that I'm updating as my new changes may make some of those redundant; or, even cause them to *worsen* the layout!]

I can generate the equation directly *in* the form that it should appear. And, see how it will look on the page as I am creating it. ("Gee, this is too wide for the column; let me set it in a text box that *straddles* the columns, at this point, and watch the text flow AROUND it")

FM can actually do algebraic reductions. So, you can show a derivation of a solution without having to manually type each step of the reduction! So:

    a/b + c/d

becomes:

    (ad + bc)/bd

with a mouseclick. Or:

    e^2 + 7pi

becomes:

    29.380205

while:

    d/dx (2x^3 - 8x^2 + 5)

becomes:

    6x^2 - 16x

Sure, it won't replace a true symbolic math tool -- but, it makes it a lot easier to "check your work": "Hmmm.... my derivation reduced to 'x + pi/4'. Why does it NOT agree? Have I typeset the equation incorrectly? Or, was I sloppy in my reduction?"

Likewise, having some drawing tools *in* the DTP program means I don't have to keep jogging back and forth between a "drawing program" and the DTP program. Some of the LEAST desirable results have come from being forced to do things in an external program that then limit how that object can be manipulated in the DTP program.

For example, labeling the axis of a plot generated externally means that if I want to enlarge or reduce the *plot*, all of the associated text labels end up enlarged or reduced as well. And, in some "font" chosen by the plotting program (which may clash with the typefaces used in the publication).

I can search for all occurrences of FIXME (ignoring or respecting case). I can also search for all occurrences of a particular character/paragraph tag APPLIED to some text. So, /FIXME/ shows up but not *FIXME* (italic and bold, respectively).

Or, "anything tagged with the FIXME character format" (in my case, visible as "red")

Should I split the document into multiple chapters?

"Here are the commands that you add to the diskless system to support the various internet daemons. In the next chapter, we'll list the commands and configuration files that are used to support the creation of file systems. And, the chapter after that we'll list the commands that are used to prepare the manual pages." So, now I have to process a *book* instead of a single document: "Building a diskless workstation from scratch".

FM has been exceptional in meeting my goals, thus far. It's quick, relatively bug free (i.e., I've learned how to deal with the bugs that remain) and produces excellent quality output. I can easily generate PDF's directly from the "source" -- where it will export particular paragraph tags as chapter/section headings, etc. in the PDF (i.e., include all Figures, Tables, Major Headings and Minor Headings in the contents of the PDF -- but, nothing *below* that level of detail)

But, I *use* the features that it makes available. I shouldn't have to be worrying that I've introduced too many cross references. Or, a table with too many columns -- possibly containing footnotes and cross references; or, cross-referenced *text* that is automatically inserted from other places in the document! E.g., "See Figure __ on page __ titled _____" -- where these blanks are filled in by chasing down the cross reference that is indicated "here" (and, of course, that can move Figure __ to a different page if this extra text happens to reflow the position of the frame containing that figure "down the road").

One would think this would just increase the amount of (virtual) memory that the tool requires. But, it appears to have other implementation problems that make it more difficult to evaluate the consequences of "adding structure" to the document.

Amusingly, there were DTP tools that I used ~30 years ago that seemed to handle this better -- as they were constrained to live in a 640K environment! I.e., more actively managing "far references".

I note that printing white text on a black background is not possible (except in a table cell, etc.). Lots of other little tricks that progress seems to have denied me... :<

Reply to
Don Y

Sounds like you need Word.

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

I've snipped most of this...

Yes, that can be done with WYSIWYG tools. But it is a serious PITA to do, compared to writing tags in text. I can write \emph{emphasis} without taking my hands off the keyboard - but to change a word to use the "emph" style in a WYSIWYG program means selecting the word, jumping to menus, ribbon bars, or dialogue boxes, and finding the right style from a list. It greatly disrupts the workflow. So when I am using something like LibreOffice, it is rare that I use styles other than things like header styles that have keyboard shortcuts - it's simply too much bother.

LyX is not LaTeX. It is an attempt to make a WYSIWYG editor which runs sort-of LaTeX underneath. This gives you some of the advantages of WYSIWYG, and some of the advantages of LaTeX - but also some of the disadvantages of each. Some people find it suits them, and that's fine - to me, it is a poor compromise and I have never seen it as being useful other than as a gentle way to move people from word processors towards real LaTeX.

And one thing LyX does particularly badly is rendering speed of large documents, because it tries to do /everything/, all the time.

I like getting LaTeX to do that sort of thing automatically.

Then you really want a system that will do it automatically.

Now you really, /really/ want a system that will do it automatically.

I prefer to do my maths separately using a mathematics program (or by hand) - I don't want my documentation program to fiddle with my mathematics. And any decent maths program will export in LaTeX format to make life easy.

I don't even need to search for the fixme's or todo's - my IDE can list them nicely. And I can "grep" them.

If it is 60 pages long, then yes, you probably should split it. How you split it is up to you and depends on the document. The split into multiple files should correspond to the concept of "this is the bit I am working on at the moment", and fit a reasonable size to let you render it quickly and see it conveniently in a previewer. It may also correspond to splits between different people working on the same document. Complex documents are more likely to benefit from splits than simpler ones - there is no hard rule about the sizes. But you definitely want a tool that can work smoothly and conveniently with split files.

Whether or not that matches visual or organisational splits in the final document is irrelevant.

If your 60 page document does not have some sort of organisation of "parts", you have a bigger problem than just your choice of tools.

Reply to
David Brown

(Please learn to snip!)

Did you miss the part where he said he had complex documents of 60 pages, that are causing FrameMaker problems? MS Word would have keeled over and corrupted his files long before that - it is a brave (or ignorant) man who tries to get Word to handle more than about 20 pages where there are tables and figures.

Reply to
David Brown

You don't tag each thing AS you write it!

E.g., I typically don't bother applying the "Foreign Language" CHARACTER tag to "etc.", "i.e.", "e.g.", etc. I *know* that I can safely, later, say "search for all instances of "etc." and tag ALL of them with the "Foreign Language" tag.

OTOH, I might have a "variable" called "index" that I might want to tag with the "Code" character tag. But, this "combination of letters" might occur in a non-Code context. So, a blanket search-and-replace runs the risk of incorrectly tagging some "body text" as "Code". Depending on the nature of the pattern sought, I may opt to *incrementally* search and replace (so I can review each instance encountered and decide to apply the "replacement tag", or not). Or, I may manually walk through a document double-clicking on words (which selects the entire word) and then clicking on the tag name.

For *paragraphs*, it is easier to come along after-the-fact and highlight the paragraph(s) of interest and then click on the appropriate PARAGRAPH tag. For example, the "command snippets" in my diskless workstation document are easily located, visually. Click ANYWHERE in the first line of a "command". Hold shift depressed and click ANYWHERE in the last of the (contiguous) lines of commands. The characters between the first and second click are highlighted -- which will NOT include all of the characters on the first and last *lines*!

But, clicking on a paragraph tag applies that tag to the entire paragraph(s) regardless of how much of each paragraph is selected. E.g., I could click on the space after "applies" in the sentence above; then shift-click on the second 'r' in "regardless" in that same sentence. And, both of those lines (assume a line is a paragraph) would be tagged in their entirety!

For example, the dmesg(*) output that I mentioned previously can be clicked once SOMEWHERE in the first line; then, again, *somewhere* in the LAST line. And, the entire first and last lines and all intervening lines would receive the applied tag. [each LINE of the dmesg output is actually a separate paragraph because it contains a hard newline -- instead of an *implied* newline/line wrap]

Regardless of how much -- or little -- it renders, it still requires a separate step/action to render the "input" to its "typeset" form. FM (and other DTP programs) don't add this step.

FM will automatically MOVE the table to the next page. What it can't do is decide to ELIDE text so the table doesn't move! And, I challenge you to show me how LaTeX can do that!

XXXXXXXXXXXXXXXXXXXXXXXXXXX
I want this text to fit on no more than two lines that are limited in length by the X's above and below.
XXXXXXXXXXXXXXXXXXXXXXXXXXX

You can't WRITE/compose the text with foreknowledge of how much space it will ultimately require on the page. And, no algorithm can know how to massage the text to MAKE it fit!

(should it shrink the text to 4 points? should it elide every third character?)

Instead, a human mind has to examine the text and decide if it can be rewritten/rephrased more tersely. Or, if the original goal should be abandoned and a NEW goal established AFTER letting the text move onto the next page:

XXXXXXXXXXXXXXXXXXXXXXXXXXX
Fit this on no more than 2 lines denoted by the X's
XXXXXXXXXXXXXXXXXXXXXXXXXXX

I've got two tables that are 65 lines tall. The page can tolerate 66 lines of text. The first table is preceded by two lines of text. And, FOLLOWED by two lines of text.

If the table is treated as an indivisible object, then the first page of the document will have two lines of text on it. And, nothing else. The second page will have a 65 line table.

The second page *might* also have one of the two lines of text that follows the first table -- with the second line appearing on the THIRD page, along with the second 65 line table.

Or, the second page may have JUST the table with the two lines of text on the THIRD page. And, the second table on the FOURTH page.

Looking at this (the huge BLANK areas), I might opt to use a smaller font in the first table so the initial two lines of text, the first table AND the following two lines of text appear on the first page; with the second table appearing on the second page.

Or, I may contrive to remove a line or two from the table.

It would be foolish to allow the first and second tables to "float" to "wherever they fit" and allow the intervening text to coalesce as appropriate. I'd end up with a two-line paragraph followed by another two-line paragraph on page one. Then, a page two that was just the first table. And, a page three that was just the second table (i.e., the reader is presented with two pages of tables without any explanatory text proximate to them)

By comparison, if I have lots of *text*, the layout manager has lots more opportunities to chop up that text: I'll put the first 5 lines of this paragraph on page 1 and the last 3 lines on page 2.

[In a pinch, I can let a table be split; that's not possible with a *figure*!]
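For what it's worth, the usual LaTeX mechanism for a splittable table is the longtable package; a minimal sketch, with row content invented for illustration (not taken from the documents discussed here):

```latex
\documentclass{article}
\usepackage{longtable}
\begin{document}
% A longtable may break across pages, repeating its header on each
% continuation page, instead of floating as one indivisible object.
\begin{longtable}{ll}
\caption{Exported filesystem contents}\\
File & Purpose \\
\hline
\endfirsthead
File & Purpose \\   % header repeated on continuation pages
\hline
\endhead
/bin/sh  & default shell \\
/bin/csh & C shell       \\
% ... many more rows ...
\end{longtable}
\end{document}
```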

See above.

The value of the feature is in how it automates and checks derivations that you are already preparing to illustrate. E.g., if I want to distribute 2 over (a+b) in 2(a+b)-b, I can highlight "2(a+b)" and have it appear as "2a + 2b". Then, I can associate "2b - b" to get "2a - b".

If this is NOT what I end up with, then I have cause to reexamine my pen-and-paper derivation.

The whole point of having these capabilities IN the DTP program is so you don't have to keep moving back and forth between programs.

E.g., imagine having a photo of a human head that you want to annotate with tags: "eyes", "ears", "nose", "mouth". Doing this in a drawing program means you have to ensure the same "font" is chosen in that drawing program as the font that you will be using in your document (if you change your mind in the document, you have to revisit the drawing program to update the font selection, there.)

Ages ago, separate programs handled tabular data -- so, tables were "objects" that could be imported into documents. The same sorts of issues apply: what if I change the typeface in my document? Or, want negative values in the table to be displayed in red? Or, highlight particular entries (totals, subtotals, etc.)?

Putting the BASIC operations for these sorts of things into the DTP program lets you make these sorts of tweaks without revisiting the original "tool".

For example, I have documents that include many schematics and schematic fragments. If I want to change something in those schematics (like the font that I used to label the signals), I am FORCED to revisit the original program to make those changes. Even though the schematic itself is not changing -- rather, just some PRESENTATION aspect of it IN THIS CONTEXT.

I'm not writing code. There's no IDE. If I need to remember to fix a photo that I've included in my document, I can just "mark" the photo (set its background to RED). Then, flip through the document -- in its PRESENTATION FORM -- a page at a time looking for "red things".

It renders INSTANTLY! That's the point! There's no separate "render step" involved. If I advance to the next page, the screen flashes and the page appears -- with all the tables, figures, text, etc. already positioned as they would appear in the final, RENDERED, document.

Artificially splitting the document into "chapters" just adds more work to maintaining it. The 60 pages are one cohesive unit. It's not like describing how to change a flat tire vs. how to service the engine.

E.g., the diskless workstation document proceeds from "how to configure the BIOS to PXE boot"... through "how to configure the server to satisfy the BOOTP/DHCP requests" ... to "which executables and configuration files need be present on the NFS-exported filesystem image accessed by the booting target in order to get to a 'login' prompt".

Having documented all of these requirements, I now have to progress to "what else should be added (made available on the exported filesystem) to make a usable system".

Well, I need a shell... Ah, but, looking back over the initial pages, I can see I've already documented including the /bin/sh executable on that exported filesystem -- along with the libraries and configuration files that it requires. But, I've not HAD TO include /bin/csh. So, what files does *it* require? (list them and the commands to make them available in the exported file system).

Gee, the system probably wants to allow its user(s) to send and receive email. So, which executables do I need to drag into the exported filesystem to make "mail" available on that diskless system? Which configuration files are required? How do I tell it where to find the REAL mail server? etc.

Having all of the 60pp of information accessible in that single document means I don't have to drag out "chapter N" to see if a particular file has already been included BY NECESSITY prior to this point (e.g., mailer.rc(5), hosts(5), etc.).

FM deals with "books" as the "larger documents". Therein, "chapters" tend to be separable entities. Within a chapter, headings and subheadings suffice to partition a subject into finer-grained topics.

E.g., I describe "adding daemons" in a separate section from "error logging" tools. But, neither warrants the overhead of a "chapter".

Its structure is presented in the choices of headings and (sub)headings. E.g., examining the resulting PDF shows all of these -- as well as the figures and tables (and the names of each of these) in the ToC. The page thumbnails make it easy for someone familiar with the document to locate a section of interest: "Ah, that's where the table that enumerates all of the files in the exported file hierarchy is located" (obvious because of its size). Or, "That's where the BIOS settings are enumerated" (obvious because of the "screen snapshots" it contains).

[I long ago realized the value of showing people thumbnails of significant objects in a document -- in addition to a textual ToC. E.g., a set of pictures of screen shots will do more to guide a user to the appropriate section of a document than will textual descriptions of those screens ("Ah! That's the screen I'm looking for!")]
Reply to
Don Y

Well, you can go ctl-i to get italic in most GUI word processors.

In LaTeX you can use \newcommand to do strange things, then just change the definition if you want it to be different.
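A minimal sketch of that \newcommand approach (the \code macro name and its uses are invented for illustration): tag text by *meaning* once, and restyle every occurrence by changing one definition.

```latex
\documentclass{article}
% Semantic markup: every use of \code follows this one definition.
\newcommand{\code}[1]{\texttt{#1}}
% Later, to restyle all occurrences at once, change only the definition:
% \renewcommand{\code}[1]{\textsf{#1}}
\begin{document}
The \code{index} variable is set before the loop; \code{dmesg} output
is tagged the same way, everywhere, with no search-and-replace.
\end{document}
```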

The first edition of my book was done in WordPerfect 5.1+ for DOS, which I still use for some things. I converted it to LaTeX for subsequent versions, using a very handy tool called WP2LaTeX that somebody helpfully wrote. It wasn't perfect, but it did a good 90% of the grunt work. (Almost wrote 90\%.)

WP was able to repaginate, re-index, and generate all cross-references and footnotes for a 750-page book with hundreds of figures and equations, all in 640K of memory, and it did it without crashing, which all the contemporary (pre-2007) WYSIWYG tools couldn't, even with a couple of gigabytes of memory.

I use a regular makefile for the book, and it's in one big file, which I prefer, because it's easier to keep all the cross references straight.

For things like papers and manuals, I've been using TeXStudio, which I like pretty well. It renders using DVI, which is much faster than PDF, so hitting F5 shows you what you did in a few seconds, a lot like the view window in WPDOS.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

when I was at uni everyone used auctex

formatting link

-Lasse

Reply to
Lasse Langwadt Christensen

Don't dismiss LyX so quickly. Here's my testimonial: I forgot all my LaTeX chops because my current job requires emails and occasional Word, even though 20 years ago I used LaTeX for everything (papers, presentations, thesis, letters, Christmas cards, etc).

Recently, I translated a Chinese manual for a signal generator, by editing output from Google Translate. It turned out that it required a lot of automated touch-up to format figures, tables, etc, so I chose LyX and wrote Perl scripts to massage its LaTeX-like on-disk form. I was pleased with the result.

By the way, the LaTeX family has a strong integration with tools like Octave, Maxima, and such. It allows you to edit your formulas or data, and re-generate output equations or plots. You can do this type of thing with Word using COM integration, but it's not as powerful as LaTeX.
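One hedged illustration of that workflow (the file name is hypothetical): Maxima's tex() function writes an expression out as a TeX fragment, which the document then pulls in with \input, so regenerating the math never means retyping it in the document.

```latex
% In Maxima (run separately; rerun whenever the formula changes):
%   tex(integrate(x^2*exp(-x), x, 0, inf), "eq-moment.tex");
\documentclass{article}
\begin{document}
The second moment of the distribution is
\input{eq-moment.tex}  % hypothetical file, written by Maxima above
\end{document}
```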

Reply to
Przemek Klosowski

And, will you be repeating that exercise again, any time soon? (one time events have different significance than ongoing tasks)

Most of the tools that I use have reasonably well documented ASCII file formats that I can massage "programmatically". E.g., I extract data from my FM documents to include in my codebase. Likewise, data from Illustrator files and AutoCAD documents. And, of course, importing documents *into* each of these.

FM will import/export SGML, FWIW. Newer versions XML. So, you're not screwed by a black-box format (like MSWord).

Unfortunately, very few of the tools that I use "speak LaTeX". IIRC, Mathematica & MatLab have some hooks (addons) to support this. There *may* be a plugin for FM (or for a newer version thereof) to do this. But, I don't see the appeal for such a tool -- pick one or the other.

But, not true of things like Illustrator or AutoCAD (though a newer release might have added that capability... I tend to be slow to upgrade, preferring to live with the bugs I *know* than trade those for a set of bugs yet to be DISCOVERED!)

Reply to
Don Y

True, of course. But I think his point was that it wasn't difficult to "pick up" and use those tools.

Reply to
Tom Gardner


Did you notice his reference to having used them (extensively?) before?

I haven't used Ventura in more than 20 years and suspect I could edit the "raw" files that it uses just by examining the contents of one with a text editor. Likewise, the .ai files that Illustrator produces.

IME, the biggest issue is getting used to *how* an encoding "does things" and the characteristics of its syntax. You'll forget *specifics* in short order. But, the "basics" will linger in your consciousness for a long time! E.g., I doubt anyone would have any problem *guessing* what the following would produce -- in sufficient detail to be capable of massaging it (manually or programmatically) without invoking the application that created it *first*:

[/Dest/G1110618/Title(TABLE 6. First Pass Costs\021)/OUT FmPD2
[/Dest/G1109976/Title(Second Pass Refinements)/Count -1/OUT FmPD2
[/Dest/G1111052/Title(TABLE 7. Second Pass Costs\021)/OUT FmPD2
[/Dest/G1038194/Title(Third Time\220s a Charm)/Count -2/OUT FmPD2
[/Dest/G1111658/Title(TABLE 8. Third Pass Costs\021)/OUT FmPD2
[/Dest/G1111919/Title(FIGURE 2. Updated Infrastructure)/OUT FmPD2
[/Dest/G1111940/Title(Thinking Smarter)/Count -1/OUT FmPD2
[/Dest/G1112762/Title(TABLE 9. Updated Costs\021)/OUT FmPD2
[/Dest/G1112739/Title(Pushing the Envelope)/Count -4/OUT FmPD2
[/Dest/G1111161/Title(TABLE 10. Initial Distribution of Rule Component Sizes\021)/OUT FmPD2
[/Dest/G1112203/Title(TABLE 11. Distribution with First text Character Omitted\021)/OUT FmPD2
[/Dest/G1112535/Title(TABLE 12. Distribution with Default Rules Omitted\021)/OUT FmPD2
[/Dest/G1113141/Title(TABLE 13. Revised Costs\021)/OUT FmPD2
[/Dest/G1013666/Title(Other Optimizations)/Count -7/OUT FmPD2
[/Dest/G1113393/Title(Left Context Rewrite)/OUT FmPD2
[/Dest/G1114468/Title(Streamlining Wildcards)/OUT FmPD2
[/Dest/G1115194/Title(TABLE 14. Single\025Character Sibilant Handling)/OUT FmPD2
[/Dest/G1115693/Title(TABLE 15. Single\025Character Nonpalatal Consonant Handling)/OUT FmPD2
[/Dest/G1115913/Title(New Wildcards)/OUT FmPD2
Reply to
Don Y
