Kanerva on "Computing with 10,000-Bit Words"

This article from 2014 is Kanerva's 7-page summary of the then-current state of Sparse Distributed Memory, Spatter Codes, and related areas, with an extensive reference list.

The computational engine Kanerva describes here is something that the denizens of S.E.D. could implement in a small FPGA, should it be useful to do so. A likely use would be content-addressed memory that can tolerate considerable noise.

The 10,000 bits is the minimum that works reliably, and Kanerva speaks of using 65,000 bits, which would occupy ~8 Kbytes, trivial by present-day hardware standards. This would fit in SRAM.
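For concreteness, here is a minimal software sketch (not from the article) of that noise-tolerant lookup, assuming Hamming distance as the closeness measure: even with 20% of the probe's bits flipped, the nearest stored word is still recovered.

```python
import random
random.seed(1)

W = 10_000  # word width in bits; Kanerva's nominal size

def rand_word():
    """A random W-bit word, modeled as a Python int."""
    return random.getrandbits(W)

def hamming(a, b):
    """Number of bit positions where a and b differ."""
    return bin(a ^ b).count("1")

def flip_bits(word, n):
    """Corrupt a word by flipping n distinct randomly chosen bits (the 'noise')."""
    for pos in random.sample(range(W), n):
        word ^= 1 << pos
    return word

# Store 100 random words, then look one up from a badly corrupted copy.
memory = [rand_word() for _ in range(100)]
probe = flip_bits(memory[42], 2000)           # 20% of the bits flipped
best = min(memory, key=lambda w: hamming(w, probe))
assert best == memory[42]
```

This works because unrelated random 10,000-bit words differ in about 5,000 bits, so a probe only 2,000 bits away from its original is overwhelmingly closer to it than to anything else stored.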



Joe Gwinn

Reply to
Joe Gwinn

What does it mean to "tolerate considerable noise"? Does that mean matching most bits but not all? That would probably be implemented by counting the matched bits, which can be fairly slow compared to other operations. I suppose the various speedup methods could be used to improve that time. I've looked at 12-bit adders, and I've looked at adding lots of 1s, but never 12 bits' worth of 1s.

But this depends on what is meant by the "noise".

Reply to
Rick C

This is a form of true associative memory, and so it is far less fussy than traditional content-addressable memory: it will return the nearest item in memory rather than insisting that the match be exact.

SDM is built upon sparse random vectors in a vector space of, say, 100,000 dimensions. By "sparse", they mean that typically only 2% of the vector elements are non-zero.

The arithmetic operators (add, multiply, ...) come from abstract algebra, and while they often resemble the operators of integer arithmetic, they need not be exactly that.
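In the binary Spatter Code case, for example, multiply is bitwise XOR (binding, its own inverse) and add is a bitwise majority vote (bundling). A software sketch of both, with the word width and random operands chosen here only for illustration:

```python
import random
random.seed(0)

W = 10_000  # word width; matches the article's nominal size

def multiply(a, b):
    """Binding: bitwise XOR. Self-inverse, so multiply(multiply(a, b), b) == a."""
    return a ^ b

def add(*words):
    """Bundling: bitwise majority vote. The result is close to each operand."""
    out = 0
    for pos in range(W):
        ones = sum((w >> pos) & 1 for w in words)
        if 2 * ones > len(words):        # strict majority (use an odd count)
            out |= 1 << pos
    return out

def hamming(a, b):
    return bin(a ^ b).count("1")

a, b, c = (random.getrandbits(W) for _ in range(3))
bundle = add(a, b, c)
assert multiply(multiply(a, b), b) == a     # unbinding recovers a exactly
assert hamming(bundle, a) < W // 3          # ~W/4 expected; unrelated words sit at ~W/2
```

Note that "multiply" here is nothing like integer multiplication, which is exactly the point of defining the operators abstractly.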

In Abstract Algebra (also known as Modern Algebra), one defines and studies algebras as mathematical objects in themselves, the algebra we learned in high school being a special case. It's pretty simple: there are a handful of things that an algebra must possess or satisfy. Zero and Unity (with specific properties) must exist. Addition and multiplication must be defined and must obey the commutative and associative laws, et al. And so on.

I've invented special-purpose algebras to solve a specific problem.

Anyway, with this in mind, reread the section about how the math operations are defined.

It's all pretty simple, and one could implement this in an ordinary FPGA. And part of the design is deciding just how fussy the association should be: how far a lookup vector can be from something in the memory and still be returned as a match (with an estimate of closeness), versus being reported as not found.
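That fussiness decision amounts to an accept radius on the distance. A sketch, where the threshold value is an arbitrary design choice for illustration, not a number from the article:

```python
import random
random.seed(2)

W = 10_000
THRESHOLD = 3500   # accept radius in bits; a design choice, not from the article

def hamming(a, b):
    """Number of bit positions where a and b differ."""
    return bin(a ^ b).count("1")

def lookup(memory, probe):
    """Return (nearest word, distance) if within THRESHOLD, else None (not found)."""
    best = min(memory, key=lambda w: hamming(w, probe))
    d = hamming(best, probe)
    return (best, d) if d <= THRESHOLD else None

memory = [random.getrandbits(W) for _ in range(50)]
unrelated = random.getrandbits(W)    # ~5,000 bits from everything stored
assert lookup(memory, unrelated) is None
assert lookup(memory, memory[7]) == (memory[7], 0)
```

The returned distance doubles as the "estimate of closeness"; in hardware the min-and-compare would run across all stored words in parallel.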

Joe Gwinn

Reply to
Joe Gwinn

Amazing. You managed to use so many words and yet didn't even attempt to answer my question.

So how is "closeness" of a match defined? Can you explain that?

Reply to
Rick C

The number of bits that are the same, called coincidences. I assumed that you would reread that part of the article.
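Concretely, the coincidence count is an XNOR of the two words followed by a population count; in an FPGA the popcount is just an adder tree. A minimal software model:

```python
def coincidences(a, b, width):
    """Bits where a and b agree: XNOR the words, then count the 1s."""
    same = ~(a ^ b) & ((1 << width) - 1)   # XNOR, masked to the word width
    return bin(same).count("1")

# 8-bit example: the two words below agree in 7 of 8 positions.
assert coincidences(0b10110010, 0b10111010, 8) == 7
assert coincidences(0b1111, 0b0000, 4) == 0
```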

Joe Gwinn

Reply to
Joe Gwinn
