Editor recommendation

You apparently tag *while* you are typing the original source. So, for example, you mark "Bezier" or "Special Cases" as headings, or a "See ..." cross reference, as you type.

If I know I will be including several large code fragments or tables, I will *plan* on importing them as separate flows so they can be efficiently handled as separate entities. E.g., anything more than ~10 lines of code warrants being set off from the main body of text as a "figure" -- to help ensure it stays together, visually, and doesn't get split up over different pages (in much the same way that a table wants to be a cohesive entity -- not just columns of text *within* the regular body of text).

So, how are you *proofing* the result? *Hoping* you got it right when you were typing all that ASCII text *prior* to importing it into your WYSIWYG previewer? *Hoping* you remembered to use "open double quote" and "close double quote" to bracket “Bézier” instead of straight double quotes ("Bézier") or, worse, half-and-half (“Bézier" or "Bézier”)?

Or, do you zoom around the entire document on "high magnification" hoping you catch everything one "peep-hole" at a time?

So, you are doing all your tagging *before* seeing it typeset -- instead of benefiting from a GUI to apply tags as needed. Oh my!

I.e., "emphasis", "heading", "footnote", "code", etc. don't exist for me *until* layout. If I import a piece of code, it *is* a legitimate ".c" file -- not "source code adorned with layout/formatting info".

What's your obsession with "trusting your typesetting system"? I trust mine as well! Difference is, you apparently choose to type in all those tags *before* the "input" has been typeset. And, you're thrilled that this isn't corrupted as it is rendered "typographically".

Mine isn't either! I just don't spend time typing "\vfill\eject" when I want to insert a page break. Or, "\line{\hfil Flush right}" to cram something against the right margin ("padded left").

I.e., you can read the *content* of one of my documents "as a normal human being" before it is imported. No concern over "Gee, what does '\hrule' mean?"

If you find something that needs to be corrected while proofing the *typeset* rendering of the document, do you correct it *in* that WYSIWYG? Or, do you have to re-open the "source" document, locate the corresponding portion that contains the text/tag/formatting that you want to alter, tweak that, flush it to disk and then "refresh" the WYSIWYG rendering?

*When* you correct it, how do you readily verify that you've corrected it correctly? E.g., in my original document, the footnote:

    Unqualified, the term “Bézier”, herein, shall refer to cubic Bézier curves

did not contain quotes around the first Bézier reference. They were added while previewing the typeset version ("Crap! I need to quote that!"). And, I had to make sure I wasn't quoting with the "wrong" quotation marks. Just zoom in (TO OVERCOME THE LIMITATIONS OF THE MONITOR) so you can see that you are typing '“' and '”' and NOT '"'. Or, open the *source* document and type "\lq\lq" and "\rq\rq" [I'd rather *see* the effects of what I'm typing *while* I'm typing it -- instead of having to "refresh" the rendering and remembering to double-check that (along with any other changes I may have made)]

*I* can "zoom in" (and out!) too! But, each time you do that, it takes time and effort. And, it takes time for you to "get your bearings". Should I "zoom out" so I can see how any text I am inserting affects the layout of other objects around it on the page? Then, zoom in to verify that I have typed what I *think* I was typing?

How do you add callouts to your illustrations? To as great an extent as is possible, I *don't* include text in "images" that I import (hard to do with schematic fragments!)

E.g., all of my Bezier curve examples are created in Mathematica and imported without any text annotating the points, coordinates, axes, etc. Instead, these are pasted on as callouts *after* the image has been imported. This allows me to ensure that the designations for each point appear in the same "font" and representation that they are referenced as in the body text.

[One problem has been getting Mathematica to use the same "colors" that the publishing program uses! :< ]

It also lets me change my mind as to how I want to reference them without having to revisit Mathematica to "tweak" the image I've asked it to generate. For example, I originally labeled the points A, C1, C2 and B (A and B being endpoints while C1 & C2 were control points). When I typeset the equations for the curve, it was much easier to change this to P0, P1, P2, P3 -- as the curve could be expressed as a linear combination of the weighted Pi. I could make that change *within* the publishing program without requiring any changes to the "image" onto which they were laid.
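
[For reference, this is just the standard Bernstein form of the cubic -- nothing specific to my document, only the textbook identity that makes the uniform P_i naming natural:

    B(t) = \sum_{i=0}^{3} \binom{3}{i} (1-t)^{3-i} t^i P_i
         = (1-t)^3 P_0 + 3(1-t)^2 t P_1 + 3(1-t) t^2 P_2 + t^3 P_3,   t \in [0,1]  ]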

From your descriptions and my *assumptions* as to how you work, I gather you don't spend much time preparing these sorts of documents. That it's more of an "accessory" activity for you. Someone else gives you a formal specification that you can just *read*. Someone else prepares the final documentation for the user.

*Your* documentation can fit in comments in source files, etc.

I tend to produce about 4 pages of formal documentation for each page of code: a page of spec, a page of "documentation" (explanations of the implementation, etc.), a page of test strategy and a page of "user documentation". Some subsystems tip this balance one way or the other but it tends to average out over the course of a project (hardware projects have different sorts of documents but similar "effort weights")

[Of course, this lets me make my source code "more dense" because I can move any lengthy descriptions, explanations, derivations, etc. out of the commentary and *assume* anyone reading the code already understands the requirements, theory and structure laid out in those documents. The source code is then "just an implementation" :> ]

So, I can spend 30 pages describing characteristics of cubic Bezier curves that my code relies upon -- and, then never have to explain *why* I am doing something *in* the code that relies on those objects.

Of course, preparing such documents is a lot more "expensive" than just adding commentary to some source. But, the goal of documentation is to ensure folks *understand* the issues being presented -- not just a "check off" item that you can claim you have satisfied.

[To that end, my introduction of multimedia and interactive "demos" to the documentation will hopefully be a net asset. Provide a richer means of presenting concepts instead of just relying on lots of glyphs on paper...]
Reply to
Don Y

Sure, when I type a heading, I know *now* that this is going to be a heading, so I tag it *now* (instead of in a second pass, where I could overlook it).

They look different even in normal font sizes. And to know for sure, there's a search function to search for things like straight quotes, or closing-quotes-not-followed-by-a-space.
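
(As a rough sketch of the kind of check I mean -- the file name and the exact patterns here are only illustrative, and it assumes UTF-8 plain-text input:

    #!/usr/bin/perl
    # quote-lint.pl -- flag suspicious quoting in a UTF-8 text file (illustrative sketch)
    use strict;
    use warnings;
    use open qw(:std :utf8);

    while (my $line = <>) {
        chomp $line;
        # any straight double quote is suspect in running prose
        print "straight quote, line $.: $line\n" if $line =~ /"/;
        # a closing curly quote should be followed by space, punctuation, or end of line
        print "odd close quote, line $.: $line\n"
            if $line =~ /\x{201D}(?![\s.,;:!?)\]]|$)/;
    }

Run as "perl quote-lint.pl chapter.txt"; most of the time the editor's built-in search is enough, though.)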

If it need be, sure. Normal-size text can be read at normal magnification, one page per screen. Formulas with small subscripts need zoom.

Yes. (And the problem with usual GUIs is that they can only display "italic", and cannot distinguish between "loanword", "new term", "name of a person", "name of a publication", or other reasons why one could want a word in italic. Semantic tagging during input also allows for things like "I don't want this highlighted, but I want it in the index".)

"Before it is imported". So the difference might be that you write something, and then import it somewhere else. I prefer directly writing for the target system.

"Write and import somewhere else" is what I use if someone wants me to use Word to print C code, or something like that. I write the C code in Emacs and import it into Word :)

But even when I write something not in the target markup format, I try to tag it a little. That's one area where XML transformations come in handy, for example. Or little one-screen Perl scripts.
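
(For example -- and this is just a made-up sketch, the tag name and the pattern being whatever your target format wants -- something like this wraps things that look like function calls in a semantic tag before import:

    #!/usr/bin/perl
    # tag-code.pl -- illustrative sketch: wrap foo()-style identifiers in a <code> tag
    use strict;
    use warnings;

    while (my $line = <>) {
        $line =~ s{\b([A-Za-z_]\w*\(\))}{<code>$1</code>}g;
        print $line;
    }

The real scripts are rarely longer than a screen.)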

Precisely.

And, being used to programming, I find this workflow very natural.

It's not a good workflow for things like party invitation posters. But for tech manuals I find it optimal.

Works fine with LaTeX and TikZ (or xfig).

(But, yes, it needs getting used to. And, admittedly, I don't use many pictures, and for UML diagrams exported from a modeling tool, or plots exported from, say, Excel, I don't care for fonts. After all, it's tech docs, not a glossy magazine.)

Stefan

Reply to
Stefan Reuther

Headings are always short/terse (because I don't like headings that "wrap" onto a second line and I usually use a two column format... not much room for a lengthy heading). So, "a few words" in a paragraph by themselves is a visual cue when the text is imported.

I go through the document and click once *in* each such paragraph, then click "Heading". The text is reformatted on that second click to carry the attributes associated with a "Heading". At the same time, I similarly tag any "Subheading"s that I come across. So, by the end of my first pass through a document, all of these "stand out", the basic section numbering (autonumbered) is in place, and the page headers are annotated (the top of each page -- in the margin -- shows the name of the first heading encountered on that page... an aid to finding "sections" without having to visually scan the contents of each page as you thumb through the document).

If there are other "special paragraphs" in the document (like "side headings", "display quotations" or even short "code" fragments), I will tag them as well. Basically, anything that has a special "visual treatment" that you'd be able to recognize just from coarse appearance (e.g., "display quotations" are usually set off from surrounding text, indented and displayed in an italic or script/decorative "font")

Because you only need to click twice to tag such a paragraph (once *in* the paragraph to select it -- no need to highlight all of its contents! -- and once to pick the tag to apply), you can go through ~100 pages in a minute or two (literally).

Then, I create a nominal "figure frame" (a first-class object into which text/graphics/imagery/etc. can be imported). I try to keep most "figures" a consistent size -- not too small and not too large (though there are always exceptions). Within this figure frame, I introduce a second frame containing the caption text "Figure <number>: " (the number being autonumbered). I then insert this composite "captioned figure frame" at specific points in the body of the text.

By convention, this is almost always immediately after a sentence similar to "Figure X illustrates the relationship between foo and bar." This ensures the actual frame will be inserted *after* the text that is referencing it -- even if that forces it onto the next physical page.

This sentence will be in a paragraph that is immediately above a terse (in the same way that the Headings and Subheadings were terse) paragraph beginning with "FIGURE" (my convention: "insert a figure, here"). The text following the word "FIGURE" will be the caption for the particular "Figure" that will be contained in this newly inserted "figure frame". So, I cut and paste it into that "caption frame".

The act of inserting the captioned frame has created a unique (serial) identifier for that figure. And, the "Figure X" body text referencing it can be replaced with a "cross reference" tag so the referencing text (immediately!) reflects the actual identifier for that figure. This also makes the automatic creation of the "List of Figures" a piece of cake! (ditto for Tables, as below)

In some cases, a figure may be nothing more than a "chunk" of text that I want to treat as a solid entity (e.g., the code for a function, which might want to be typeset as a single wide column instead of a pair of narrower columns).

Once all the figure frames have been inserted, I go back and paste the images, illustrations, graphs, etc. for each of them into their respective frames (these are each freestanding files, typically, so this just embeds a reference to the "image" in the frame -- but, the contents of the file appear on screen in that frame so I can verify that I have selected the correct file, scale it to fit the frame, etc.).

Tables are a bit more complicated as they often have different forms -- number of columns, rows, heading rows, ruling between columns, etc. And, some really large tables might want to be set in their own frames (just like the figures). I have several documents that have tables that span 5 or 6 pages! Obviously, I want to be able to manipulate these as objects and not let the program decide where they should reside and how text should flow around them!

After this, I build any equations that have to be inserted. If these are simple (e.g., polynomials, rationals, etc.) they are almost like typing in regular text (though the range of symbols used and how they are typeset requires special handling).

OTOH, if they involve more complex "decorations", delimiters (different levels of parens/braces/etc), varying height elements (integration, summation, product, matrices, etc.) then it can be pretty tedious to get things right. Esp if it is a series of equations showing how something was derived.

Once the equations, tables and figures are in place, I can see how the layout has been mangled to accommodate their individual space requirements. Given the large frequency of these in my documents, it is hard for most automated tools to come up with an efficient layout that doesn't inject lots of "wasted whitespace": "Gee, I wasn't able to fit this figure in that 3.4 column inches at the bottom of page X -- cuz its frame is 3.5 inches tall. So, I've left a big empty space, there, and moved this to the top of page X+1. Ah, but that caused the table that had previously fit at the bottom of page X+1 to be split such that half resides on X+1 while the other half now resides on X+2 (But, don't worry, I took the liberty of modifying the caption for that table to append "(Continued)" to it *and* also made sure I replicated the heading row(s) for the table onto that second fragment)"

As a result, I typically have to do a lot of tweaking to make things more visually pleasing -- adjust the sizes of image frames, elide a word/phrase from a paragraph to eliminate a widow/orphan, add some embellishment to the text to make a "big hole" less empty, etc.

But, you have to resist doing this prematurely as the document often has *semantic* changes required. So, I read through it, carefully, and verify that the issues I present are clear. Often this means adding a footnote to some body text (or, caption text or even text *within* cells of a table!) This further alters the layout of the document. Sometimes, adding a single word to a footnote has dramatic consequences to how the rest of the document lays on the page -- because that word caused a footnote to wrap onto a second line (even at 8 points, a line is a line!) which caused something else to fall out of that column, page, etc.

It's during this first reading where I tag *words* and *characters* (previously, I've only tagged *paragraphs*!). "Character tags" are a disjoint set from "paragraph tags". So, I can have a "Code" character tag that I apply to individual words or characters *within* a paragraph along with a "Code" paragraph tag (which, conveniently, visually resembles the "Code" character tag) that applies to paragraphs.

So, in body text like: "The alt statement is used to multiplex sources from the different channels which can source events -- the data stream and the timeout channel." I can select "alt" and apply the "keyword" tag -- which is a special form of the "code" tag (e.g., fixed width "courier", emboldened).

This is where I search & replace "etc.", "et al.", "i.e.", "e.g.", etc. with versions carrying a "Foreign" character tag (which visually renders them in an italic version of whatever font they are currently typeset in -- but, semantically, tags them as "foreign words").

Similarly, words/phrases that seem to require "emphasis" as I am reading the text get tagged with the "Emphasis" character tag. These also appear in italics -- but, there are different semantics involved (from "foreign language" or "article title" or any other character tags that *happen* to appear as some form of italics).

Being able to make these changes *interactively* and see the (visual and layout) consequences immediately helps guide any further changes that I make. ("Hmmm, if I apply that tag universally, then all of this text is going to stretch a wee bit and I'll end up filling up this void *without* having to resort to fine kerning changes...")

As I said, the presence of these large typographical objects makes my documents tedious to typeset effectively ("in a visually pleasing manner").

You (at least, *I*!) don't want to have to think of every possible thing you might have to verify and, thus, invoke a search function to locate. It is *so* much easier just to run your *eyes* over the resulting page and notice, "Gee, that should be straight quotes instead of curly quotes" or "Yes, those quotation marks *should* be immediately followed by a period as they enclose the last word in that sentence".

I don't take out a magnifying glass when I am reading a *printed* version of the document. Why should I have to use one when I am *typesetting* it? :>

Ah, get better tools! :> I can inspect the tags (both character and paragraph -- cuz both can be in effect at a given point) that are "in play" at any point in the text (location of cursor) by looking at the status line as I move the cursor along. I can see where each "reference point" in the text occurs (without affecting the actual spacing or layout of the surrounding text).

E.g., an index entry tagged to a particular point in the text; a particular figure frame's "insertion point" (which, on inspection, is a reference to the actual *frame* located elsewhere and containing that frame specific "marker"); the *text* (fetched *from* another document) associated with a reference to a paragraph in that other *document* (See section 23, "Output Characteristics" in the "System Hardware Reference" document); etc. And, all of them in terms that a "mere human" can relate to (no "\xref(ref "SysHard.doc", foo)" notation).

When you write your code, do you embed the typesetting commands *in* your source? (e.g., LP) Or, do you *add* those AFTER you have imported it into your documentation? *And*, add the tags *in* Word!

When I publish source in a document, the source remains UNCHANGED! No bugs creep in because I accidentally mangled the source while trying to INJECT typesetting directives.

I despise documentation that contains technical typos. E.g., I should be able to literally type any code, schematics, etc. that is contained in your documentation and EXPECT IT TO WORK as you have described it to work (in that documentation). I shouldn't have to "debug" your typesetting.

"Hi, I've reproduced EXACTLY what your XYZ-123 document sets forth on page 84 and it's not working. I've looked at it and can't imagine why its not working. What have I done wrong?"

("Ah, sorry. Not *your* fault! The compiler switches listed there are in error. That should be "-g" not "-G"!")

I don't. It means I have to maintain *two* documents concurrently (even if one is just a "memory buffer"). I make my changes to the underlying "source" *through* the viewport that the GUI provides me. So, I know that what I am seeing is actually what I *have* (have I saved the source file and not yet updated the GUI? Have I saved the GUI but made changes directly to the source file which have not yet been saved and *refreshed* in the GUI? Too prone to error when you think you're "done" with one -- only to discover that the *other* is "more recent")

So, you ensure any graphics imagery you may have been editing is flushed to disk, then refreshed in the GUI; ditto any schematics; source code; etc.? I guess I'm just old and senile -- I could never be sure I had flushed *everything*, refreshed *and* REVIEWED the typeset version of the composite document before closing down a work session each day. I'm *sure* I would OFTEN find the document wasn't "as I remembered it" the next morning ("Gee, I thought I had changed that paragraph, yesterday?" or, "Funny, I don't *remember* that text as not fitting in its frame when I looked at it yesterday...")

I find documentation that is easy on the eyes tends to get more use than stuff typeset with a lineprinter (Gries' _Compiler Construction for Digital Computers_ being a great example of the latter!). Screenshots that are too big, small or *coarse* are tedious to look at. B&W photos (esp if reproduced xerographically!) obfuscate instead of enlighten. Typos, inconsistencies, inaccuracies, etc. confuse instead of enlighten.

[Of course, "novices" who go hog-wild with "fonts" and "frills" and colors work against those goals!

Documents are supposed to convey information. Good documents convey it accurately and effectively. E.g., I can "show" a "reader" the consequences of a particular formant synthesis with *sound* much easier than I could explain the characteristics of those sounds "in text" -- to all but a veteran speech pathologist! And, *much* easier than trying to describe that in the abbreviated format of "source code commentary"!

"Ah, so *that's* why we have all this bizarre math happening on lines 93 through 167! Omit it and you lose this characteristic!"

Reply to
Don Y

Don,

Not that I want to be a usenet topic policeman, but why would a group on embedded firmware be interested in how you typeset your documents? It just seems like an odd place to attempt to stimulate such a discussion.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

As with *most* USENET posts, it was an evolution triggered by the last paragraph of my initial reply to Roberto's initial post in this thread:

Be thankful you're just writing code! When preparing publications, you want to see a good fraction of the page (to get a feel for how it lays out, etc.) which drives text size down. Yet, you still want to be able to resolve different typefaces/styles conveniently ("is that italics? or, just excessive jaggies from the lowered relative resolution??")

There comes a point where you just can't get a monitor *big* enough!! :-/

which was related to the effectiveness of particular editors in massaging source documents.

The beautiful thing about USENET is, you don't have to read anything you don't want to read! :> E.g., I ignore all the political rants.

OTOH, you *may* pick up some tidbit about how others work if you *do* choose to read something *apparently* unrelated. I've learned about lots of tools and technologies from "off topic" digressions over the years. Things that I probably would never have sought out had I not heard others discussing them (favorably and unfavorably).

Thankfully, I always post from the same account, with the same "From" line, etc. so it's a lead pipe cinch for folks to add me to their kill files if they so desire. A bit more involved for me to do the same to those folks who enjoy profanity, political rants, etc.

Reply to
Don Y


Also not wanting to be a 'net cop...

From the initial CFV for this newsgroup:

----8<----

Reply to
Don Y

I've been running Codewright v7.5 on my Win 7 Pro machine for almost 2 years. Works great and is my everyday editor, primarily for VHDL. I also have a syntax coloring file for Xilinx UCF files. I've thought about switching to something more modern and made a few weak attempts. But Codewright fits like a glove so I continue driving it...

Reply to
Paul Urbanus

I always thought editors were a very subjective, personal thing and you never get agreement :-). I had a look at Emacs years ago, but never really got to grips with it. May have been biased, but it reminded me of the single-line editors on teletypes, which I've also had to use, and the feeling was that it just belonged to another age. I want to program C, for example, not learn how to program an editor, which I really only expect to edit text and nothing more. It's just so much more natural to have full-screen GUI-based editing. All the submode stuff that you get with some editors is really just noise and complication for no reason, but of course, in the days of teletypes and serial line terminals, it was the only way to do it.

I have an original copy of the DEC Teco editor manual somewhere, and I did try that as well, but it's even more arcane and impenetrable than Emacs...

Chris

Reply to
chris

Downloaded the Windows C/C++ version for a quick look. 95 Mbyte download and ~200 Mbyte install, according to the install program. Took about 18s for the initial load but everything is instant once it is loaded. Like most things Sun, the target can be localhost or a network address. The default debugger is gdb, which suggests that it may work with openocd and makes it very interesting indeed. The GUI has a nice lightweight feel about it, which is more than can be said for some of the other offerings.

Oh yes, and over 700 plugins available !!!...

Chris

Reply to
chris


What if you want to use Linux? And you shelled out $300?

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

When our editor of choice disappeared, we ended up with UltraEdit, which is now available for Windows, Linux, and Mac OS X. Not free, but the price is very reasonable. The tech support is excellent.

Stephen

--
Stephen Pelc, stephenXXX@mpeforth.com 
MicroProcessor Engineering Ltd - More Real, Less Time 
133 Hill Lane, Southampton SO15 5AF, England 
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691 
web: http://www.mpeforth.com - free VFX Forth downloads
Reply to
Stephen Pelc
