Sampling: What Nyquist Didn't Say, and What to Do About It

Get a copy of "Drawing on the Right Side of the Brain", read it, and do the exercises. You'll be surprised at what you can do once you learn to suppress the left brain.

He's trying to figure out how to get a woman interested in the idea.

Clifford Heath.

Reply to
Clifford Heath

As I said, I can draw other things quite well -- including other "organic" things (still lifes, landscapes, etc.).

*And*, I can tell you what is wrong with *your* drawing of a person. So, one would think those two things would *imply* the ability to do it myself! :<

I'll put that "inability" in the same category as "whistling 'wrong'", etc.

[if I'm going to invest that much time learning something, I'd rather learn how to ride a unicycle -- but I fear my bones are too old for the experience!]

ROTFPMP! I will *have* to remember THAT one!

Reply to
D Yuniskis

Thanks! Two of the PDFs didn't open for me (I'll try on a different system). And, the math is far too much for me to digest at the moment.

But, it looks like this describes the process completely (though probably in more detail than I can comprehend :< )

Reply to
D Yuniskis

Most of it isn't too difficult to follow broadly. Like many things in software, the details matter, and some of them still elude me. I'm particularly interested in the details of the 128-element vectors that are generated (over the entire picture, parts of it, etc.) and exactly how these are invariant under affine, magnification, and intensity changes. Actually, there are a lot of details that need to be clearly laid out. But I get the general idea pretty well from the presentation Brown gave at ICCV'03.

If you get a chance, view the video I also posted separately. It adds something to hear the presentation take place. It even includes such details as their criteria for choosing among algorithms depending on spatial frequencies for blending, and the constant factors they use in likelihood estimation. It's worth a moment or two.

Jon

Reply to
Jon Kirwan

I understand entirely. Sometimes it is fun to play with software, try it out, and see how it works (or doesn't work). But most of the time, software is just a tool, and we expect it to work as it should.

Actually, Linux distributions pretty much solved this problem years ago - tools like "apt" and "yum", or their GUI front-ends, are excellent at finding and automatically installing all the packages, libraries, etc., that are needed. So on Ubuntu, I just type "apt-get install scribus", and apt pulls in python, the cups libraries, and whatever else scribus needs. One can make many accusations about this system, but not bloat - it is very much about re-use and sharing of packages and libraries.

In fact, I find one of the biggest problems with the Linux way of handling packages is that it is hard to get the bloat you sometimes want. For example, if you want to install two different versions of the same program, it is easy in Windows - just pick different paths during installation (assuming it's not a program that claws its way deep into the system and the registry). With Linux, this needs a lot more thought and work - the standard installation procedure is so easy, and so automatic, that it is hard to do non-standard installations.

How is that any different from anything else you install on your machine, Windows or Linux? You are also putting your faith in the programmer that wrote the software in the first place. All I can say is that it is worth getting the main parts of your system from a source that has good build and test procedures - when your distro comes from major players like Red Hat or Ubuntu, you can be reasonably confident. As you stretch out, getting packages from smaller groups or additional repositories, you might get more problems - in particular, the testing will not have covered the same range of systems.

Building from scratch is sometimes the best answer - I have typed the "./configure && make && make install" mantra many times myself. It can also be educational. But it is a lot more time and effort. Normally I expect software to just work straight out of the box, whether it be a "setup.exe" in the Windows world or an "apt-get" or "yum install" in Linux. And most of the time, my expectations are justified (or they can be lowered until they are justified...)

Reply to
David Brown

Lots of snipping to save electrons - your post was an interesting read, but I don't think I could add anything by commenting on much of it.

TeX also breaks things up into little boxes (mostly hbox's and vbox's), but most of that is handled behind the scenes.

DTP for a document like this is more of a visual art, and thus an interactive program is essential.

lyx is more of a glorified TeX-specific editor - it doesn't attempt to give you a true "live" view of your output document. I tried it a bit, but I didn't like it - since it can't handle any of the more interesting features of (La)TeX, it gives you very little. The real fun with LaTeX comes when you make macros and "program" your document - you can't see any of that with lyx. And because lyx is not quite standard LaTeX, you don't have such easy access to the enormous numbers of existing packages, styles, and add-ons.

pdf is ideal for delivering documentation to others. But sometimes it is necessary to work together on a document - I write some, someone else writes other parts. And that means using the lowest common denominator tool, which is typically a word processor. If I can at least get other people to use styles properly, then I can usually cope without tears.

I never claimed that following these typesetting rules would be easy!

Conflicting requirements are always a challenge - it's what makes the day job fun. Hands up all the engineers in these newsgroups who have been asked to make something small, good /and/ fast!

Be careful - you'll end up with a job for life.

Reply to
David Brown

My recollection of that is different. FM tries to treat every bit of *text* as a little "unit" (not every bit of page real-estate). [the term "paragraph" is approximately correct -- though it also applies to more than *just* (classical) paragraphs...]

E.g., the caption on an illustration, the text in each "cell" in a table, the page number at the bottom of the page, etc.

You can apply "styles" to almost everything in FM -- "paragraphs", *characters*, tables, pages, etc. So, you might apply the "chapter title" PARAGRAPH style to the text: "Earthworm Mating Habits". This might cause the text to appear in a large decorative typeface, right justified on the line, 2.7 inches down from the top of the page, etc.

Within that "string", you could apply the "draw attention" CHARACTER style (all these names are user defined) to the substring 'Mating'. This might, for example, cause it to be displayed in a different typeface, as small caps, bold, italic and in red ink (thereby "drawing attention" to it! :> )

When the table of contents is built, that string will appear (by virtue of a cross-reference) yet will have a different "paragraph" style applied to it -- perhaps "TOC chapter". This would undoubtedly use a *smaller* typeface with different margins (so it appears in the right spot in the ToC), etc.

I think Word (et al.) have similar capabilities wrt "styles".

I've found that to be the case for almost everything I "typeset". I use *lots* of illustrations, tables, cross references, etc. Trying to insert them in the text directly (as if writing HTML in vi(1)) is too tedious (FM supports a "visible" file encoding that you technically *could* "edit" directly).

It's much more expedient to just race through the document "tagging" paragraphs with the "right" styles, etc.

Likewise, inserting a table or a photo is much more intuitive (apply a style like "Short summary", etc.). And, you can then scribble on things directly (e.g., adding callouts to an illustration).

Oh. :< I thought it was a WYSIWYG layer atop TeX. :-/

You might find it easier to just let folks "feed" straight ASCII to someone who does the editing, etc. This helps ensure a consistent "style" is imposed on the results. Usually, getting the *content* right is where most of the work lies. E.g., if someone explains/describes something, I can completely rewrite it rather quickly (much faster than if I had to come up with the content myself). Then, it "reads" consistently with the other parts of the document (also, many people are terrible writers and are happy to have someone else "dress up" their prose.)

Yes. Now imagine marketing, manufacturing and engineering all telling you *different* goals -- and none actually doing any of the *work*! :>

No, some wanted a "boss-worker" relationship. If I'm *giving* my time, I sure don't *want* (nor NEED) a "boss" -- especially when I'm the one with the DTP experience! :> (no hard feelings; I just brought them a plate of cookies last week!)

It appears they found someone to take over the task (though it looks like it took them a full year to do so :-/ ). This is good as I believe public libraries to be a real asset (though I am deeply disappointed at how *loud* they have become!)

Reply to
D Yuniskis

I played with the Windows version for an hour or so last night. (sigh) It's got a *long* way to go. I would consider it "Wordpad with Tassels" (does more than Wordpad, but not enough to make you want to adopt it IN PLACE of Wordpad).

You might consider looking at the FM "tryout" (I think Adobe still offers one?). I don't recall how it is crippled (features, time, etc.). But, you would be able to see the sorts of things you can do and the effort required to do so.

(E.g., for scribus, I tried to create a little document with a table, a photo, an illustration, etc. -- just to see what it felt like)

Exactly. I doubt many carpenters go home after work and play with their hammers! There are enough things that I *must* (or *choose* to) do with a computer so spending extra time is not high on my list. Sort of like "window shopping" (do you REALLY have that much free time that you can waste it passively looking at merchandise without an *intent* to purchase??)

The *BSD's have a "package" system with the same functionality. As with any "true" UN*X tool, it just pieces together actions performed by other (existing) tools. E.g., consult "database" for package (to identify dependencies), "install" dependencies (which requires examining database for that package's dependencies, etc.), etc.

My "bloat" comment is that everything calls other "packages" in. And, that those packages tend to have been created independant of the packages that rely on them.

So, for example, if a "program" (getting away from notion of "package") needs to be able to *display* a PNG, it drags in the *entire* png package (which has far more capabilities) instead of *just* that "display PNG" capability.

If you use the *BSD "package" system, you suffer the same fate. Things end up (in the file system) wherever the package "author" decided to put them. And, you get exactly the "features" that he decided upon when he created the package.

The *programmer* is different from the "package maintainer"! E.g., I wouldn't question Wolfram's abilities. OTOH, when "John Doe" *packages* his creation for my use, I don't have anywhere near the confidence in John Doe (how familiar is he with the actual "product"? How familiar is he with the package system? How clever is he at getting a package to support flexible configuration? Is this something he just does "in his spare time" -- or, is he passionate about it?).

Me! ;-)

Exactly.

This is why I am slow to upgrade -- to "chase the bleeding edge". If I have a tool that works -- at least "well enough for my current needs" -- then why waste time "upgrading"? (you can spend every waking minute "upgrading" *something*)

When I build from scratch, I see what goes into a "package"; what options are present in that particular "configure"; etc. I also get a chance to look at what decisions/conclusions configure (as well as the rest of the make) comes to ("Um, I have libfoo installed! Why didn't it *find* it?").

The pkgsrc system in *BSD lets me explore packages without embracing them. For example:

"make fetch-list" gives me a list of everything that has to be downloaded (not currently on my system) to build the package. The process recursively examines each dependency so I can get a feel for how much "work" is involved -- and how much potential there is for "fixups".

"make fetch" obviously *gets* whatever is needed (using URLs listed in the package's definition file)

"make extract" unpacks things and sets up a "work" directory in which to build the package.

"make patch" applies any particular patches (including those that are specific to the package system itself -- like rewriting the original *fetched* distribution's makefiles to site the results in specific places).

"make configure" runs configure et al.

At this point, I can rummage through the "work" hierarchy to see what's there and what *will* happen when the actual "make" is invoked.

I typically run "make > dgy 2>&1" so I can examine the messages emitted during the build. If that looks right, then proceed to "make install" (again capturing stderr/out) and "make clean".

The "make install" updates the database of *installed* packages on the system so anything that subsequently "requires" this package sees that it is there.

I have my own conventions for "what goes where". And, I notice some packages are inconsistent (there are guidelines for package creation but adherence is optional :> ). So, while most of these add-on packages go in the /usr/pkg hierarchy, some I will pull into /usr/local or other places (e.g., I mount /usr/pkg late and some of these are things I might want to use even with *just* the root filesystem mounted R/O)

Reply to
D Yuniskis

What do you think of this PDF?

formatting link

Any idea how I did it?

--
For the last time:  I am not a mad scientist, I'm just a very ticked off
scientist!!!
Reply to
Michael A. Terrell

It's not the most exciting reading...

The "producer" stamp just says ghostscript. So I'm guessing that you first generated a postscript file, then used ps2pdf to convert it to a pdf.

As for the postscript file, it was perhaps generated programmatically from an existing database or table of information.

If that's the case, then you might want to consider generating the pdf file directly in the future. There are a number of pdf toolkits around - I have used reportlab with python a number of times, and it makes it easy to generate some pretty reasonable pdf's.
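
For what it's worth, here is a minimal sketch of the sort of thing I mean, using reportlab's "platypus" Table (the sample data and the output file name are just invented for illustration):

  from reportlab.lib import colors
  from reportlab.lib.pagesizes import A4
  from reportlab.platypus import SimpleDocTemplate, Table, TableStyle

  # Invented sample data - in practice this would come from the database dump.
  rows = [("Name", "Page"), ("Alpha", "1"), ("Beta", "2"), ("Gamma", "3")]

  doc = SimpleDocTemplate("index.pdf", pagesize=A4)
  table = Table(rows, repeatRows=1)   # repeat the header row on every page
  table.setStyle(TableStyle([
      ("BACKGROUND", (0, 0), (-1, 0), colors.lightgrey),   # shade the header
      ("GRID", (0, 0), (-1, -1), 0.5, colors.grey),        # a light "fence"
      ("ALIGN", (1, 1), (1, -1), "RIGHT"),                 # right-justify page numbers
  ]))
  doc.build([table])

A dozen lines like that, plus whatever query produces the rows, gets you grids, shading, and proper alignment essentially for free.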

An alternative is to have a program that generates a LaTeX file, and use pdfLaTeX to generate the pdf itself. This is an easy way to separate the data content from the style - change the style in the LaTeX part, and define macros for displaying the different parts of the document. Then the program part just generates a list of macro calls from the data, and you can easily experiment with different styles and effects. It would be much easier to get things like leaders, alternative fonts, clickable links, etc., in this way.
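
A rough sketch of that split (the \entry macro and the file names are invented for illustration): the program just writes one macro call per record,

  # generate_body.py - emit one \entry{name}{page} line per record
  data = [("Alpha", 1), ("Beta", 2), ("Gamma", 3)]
  with open("body.tex", "w") as f:
      for name, page in data:
          f.write("\\entry{%s}{%d}\n" % (name, page))

and a hand-written main.tex defines the style and pulls the list in - something like "\newcommand{\entry}[2]{#1 \dotfill #2\\}" followed by "\input{body.tex}" - then "pdflatex main.tex" produces the PDF. Changing the look is just a matter of editing the one \entry definition.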

Reply to
David Brown

A database query dump into a2ps followed by pstopdf?

--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO
Reply to
Randy Yates

Here in Norway, Christmas is celebrated mainly on the 24th rather than the 25th. So if my replies in this thread are short or non-existent, it's not because the posts are no longer interesting - it's just I don't have time to read them or reply to them.

There is no real chance of me ever buying FrameMaker (either at home or at the office), so I'm not going to bother testing it. I will give Scribus a shot some time for interest, but I doubt if I'll make much use of it. Most of my writings are technical, and I use either LaTeX if I can, or OOO if I have to. Interactive DTP is just for fun in my case.

The BSD "ports" system has a lot in common with the various Linux package managers, such as apt, yum, portage, etc. There are differences in the details and the functionality, but all are designed to make it easy to get hold of a package and any other packages that it depends on, and to keep everything updated (if you want it to).

I think you are exaggerating here. In a great many such cases, these libraries are shared by a lot of programs on the system. No "apt-get" is going to install "libpng", because the basic installation already has a dozen other programs that use the library. The same thing applies to other common libraries. And for rarer libraries, they are often made to work with the main package (and thus there is little extra). I'll not claim there is no wastage, just that there isn't much extra. It is certainly an order of magnitude better than the Windows style of including copies of every library with every program.

True, but who is to say that the programmer is better at this job than the package maintainer? As an example, the typical programmer has tested his code on one or two machines, probably with the same cpu architecture. The package maintainer will test on dozens, and do builds on multiple cpu architectures, and will integrate with system testing on hundreds or thousands of systems.

You can do pretty much the same things with apt (and its underlying dpkg tools) and yum (and rpm tools). The details are different, and sometimes you need to install extra utilities, but the functionality is all there for those that want it. You can be sure that the *BSD and various Linux distribution developers learn about each others tools, and take inspiration from them.

Reply to
David Brown

Word is poor at styles. Its biggest problem, however, is not lack of style functionality - it is that users are encouraged to manually format everything by selecting fonts, sizes, etc., for text using toolbar buttons rather than styles. That's a guaranteed way to make a document inconsistent. OOO at least makes it easier to use styles and harder to use manual formatting, which helps document layout.

No, lyx is more a "what you see is a bit like what you get" layer. While editing, you see results that are closer to the output than you would with a simple text editor, but not /that/ close. It will cope with things like bold and italic, some fonts and sizes, and some symbols. But it won't get line and page breaks right, and it won't handle macros other than predefined ones (and even then, it assumes they have the standard definitions, while (La)TeX lets you re-define everything). If you use (La)TeX in a relatively simple way, sticking strictly to the basic standard styles, then it may be useful. But it never suited me. Mind you, it was /many/ years ago that I tried it.

That's okay if I am accepting work from others, and am happy to do all the layout, sectioning, etc. It loses a lot if there is non-ASCII data (tables, pictures, etc.).

Reply to
David Brown

If you want a *critique*... :>

I like a "fence" on the sides of tables (doesn't have to be between columns) -- it helps constrain your eye so that it knows where the edges of the table are located (see below).

For *long* tables (also see below), I like to add shading to help differentiate one row from another. E.g., much like old computer "green fan fold". This helps your eyes walk across a line without losing track of *which* line they should be following.

For *wide* tables (which this is NOT!), I would opt for perhaps "every other" line being shaded. For longer tables, perhaps groups of *4* lines (i.e., 4 lines shaded followed by 4 unshaded lines). If there is some strategic value associated with some other "grouping" (e.g., every 5 lines for a table listing the integers from 1 to 100), then that would influence my choice.

You could do similar with "lines" under every N'th table line.

[FM lets you specify these things in the "table's format"]

The same sort of thing applies to wide tables -- using lines between *select* columns (of differing thicknesses to set apart "groups" of related columns).

I would also have shaded the background of the "header" line to set it apart from the table's "body".

The page numbers would be either "center justified" or "right/decimal justified". The "Page" heading would then be centered above that.

The column contents aren't "synchronized" with each other. I.e., if you were to draw a line under any *single* entry (in any column) and extend that line to the page's borders, you'd find the text on neighboring lines floating above or below this "reference". With *no* shaded backgrounds or lines between rows (as in your example), it is less obvious. By the same token, it makes it harder to do things like count how many entries are in your table, or find the 14th one in column 3, etc. (note that this is also made more difficult by the variable *height* of each entry in the table).

In the absence of a fence around the table, I would opt for a thin line in each gutter (see below) to reinforce the visual structure of the table (i.e., it is really one *long* table "folded" onto the page).

The "See " references I would write as "(/see/ )" (why capitalize "See"?) This makes the *reference* subordinate to the actual "datum" and helps differentiate the cross reference from parenthetical cases wherein you are expanding on an abbreviation (Automatic Musical Instruments).

Lots of ideas as to how you *could* have done it. :>

How *I* would do it in FrameMaker:

Create a 4 column "master page" layout (or, just change the current "frame" on "this page" to be 4 columns... depends on whether or not you want to reuse this stuff). Pick an appropriate gutter size (if you put sides on the table, you can minimize the gutter; otherwise, I would go for a "noticeable" size gutter -- ideally with a 1 point line down the center)

"Insert Table", 2 columns, ~100 (?) rows, 1 heading row. Specify the shading/lines that I mentioned above.

Fill in the Table Title ("Index to SAMS CM & RC Manuals").

Fill in the two "header cells" ("Brand", "Page").

Select right column ("Page"). Specify "right" justification (or "decimal" if you want to go that route). Pick typeface, etc.

Select the header row. Change to desired typeface, bold, etc. Specify "center" (justification)

Fill in table contents...

FrameMaker will fold the table into the next column once it reaches the bottom of the current column. Then the next column. And the next. Etc. Continuing onto the next *page*, as necessary. Yet, it will still exist as "table 1" (conceptually). If you don't have enough rows, click "add row above/below" (after selecting an approximate number of rows to add -- so you don't have to keep adding one at a time!). When done, delete any unused rows.

I think MSWord has a "convert to table" capability (wraps a table around a "delimited" set of lines of text). I think it also has provisions to add lines between rows/columns, etc. I'm not sure how/if MSWord lets you 'fold' tables, though.

You can also build it brute force with tabs, etc. But, that gets a lot harder to maintain (e.g., what happens if you want to insert "Victor Corporation of Japan" in there??)

Reply to
D Yuniskis

Apologies for not eliding all that cruft -- I got distracted looking at the PDF.

Reply to
D Yuniskis

Ah! (smacks head) I had interpreted "how" to mean "how I *typeset* it"

Reply to
D Yuniskis

Exactly. While "scientists" like math because it is unambiguous, terse, etc., I take a similar attitude towards software/algorithms -- "what SPECIFICALLY are you *doing*?" In my case, "school" is a distant memory so "theory" has been severely eroded by "practice".

Kinda like trading in the Ferrari for a good pickup truck (same *power* but different way of *using* it). Then, being asked to do a few laps at Indianapolis... :-/

OK, I pulled it down. I'll try to get some "quiet time" to watch it.

Thanks!

Reply to
D Yuniskis

I think that is more a consequence of "lack of understanding", "lack of training", "expediency", etc.

E.g., FM has all those same buttons in the toolbar (even has a set of toolbar buttons to quickly cycle you *through* the various toolbars!). So, I can turn italic, bold, small caps, etc. on/off at will from the toolbar. However, I know *not* to.

Instead, a set of 4 (?) buttons above the right scroll bar turn on/off the "important things". E.g., one calls up the "paragraph style" menu (a floating window that lists all of the user defined paragraph styles) while another calls up the "character style" menu, etc.

With experience, you know to use the styles to tag "strings" with metadata (effectively). So, instead of applying bold or italic, you might apply an "emphasis" character style. Or, an "article title" style vs. a "book title" style, etc.

One of the tricks is learning which purposes justify a particular "style" -- the various roles "strings of characters" can take on in a document.

This, BTW, was something I disliked about scribus...

So what does it *buy* you? I.e., why bother with it?

Tables are tough to pass around in a "portable" form. But, pictures and illustrations can be reasonably portable.

Finding someone willing to take on the "cleanup" is a bit more challenging. People inevitably skimp on what they *should* have done (material presented to the "editor") so the editor/typesetter ends up with the "dirty" end of the stick...
Reply to
D Yuniskis


It was an HTML page, created by importing a comma-delimited database file. I used search & replace to break the data into cells and lines, then added a header and footer to the raw table.

Then it was printed to PDF995, which is a Ghostscript shell. It took me about five minutes to convert the raw data into the PDF.
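
(If the conversion ever needs doing again, that search & replace step could also be scripted - a rough Python sketch, with invented file names, that does the same CSV-to-HTML-table job:

  import csv, html

  with open("data.csv", newline="") as src, open("table.html", "w") as dst:
      dst.write("<table>\n")
      for row in csv.reader(src):
          cells = "".join("<td>%s</td>" % html.escape(c) for c in row)
          dst.write("<tr>%s</tr>\n" % cells)
      dst.write("</table>\n")

The header, footer, and print-to-PDF995 steps stay the same.)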

--
For the last time:  I am not a mad scientist, I'm just a very ticked off
scientist!!!
Reply to
Michael A. Terrell
