Jocelyn Ireson-Paine

I am a freelance programmer. My website says more about this and the other things I do.

But I was tempted into category theory, not least by my friends Hendrik Hilberdink, Răzvan Diaconescu, Petros Stefaneas, and José Bernardo Barros. And their supervisor, Joseph Goguen.

Using experience gained from writing Web-based simulations of the economy, I have implemented some Web-based category theory demonstrations. Their main input page is here; and there is one more that I’ve adapted to use the same notation as the nLab product page. If I could get funding, I’d like to make these easier for novices to learn from. Jamie Vicary offered to host them on the Oxford Computing Lab’s servers: a good next step would be to tidy up and document the whole system, then make the source publicly available.

I’d also like to popularise category theory through my blog at Dr. Dobb’s. That’s partly because I find it intrinsically fascinating; but also because I feel it has huge potential in Artificial Intelligence and cognitive science. I’ve written about that in an n-Category Café posting. Places where I think category theory could benefit AI and cognitive science include neural networks and their relation to logic and model theory, analogical reasoning, and holographic reduced representations.

One benefit is the tools category theory gives us to “achieve independence from the often overwhelmingly complex details of how things are represented or implemented”. (I quote Joseph Goguen’s A categorical manifesto.) For example, holographic reduced representations represent symbolic structured data as high-dimensional vectors, storing and extracting items via operations similar to correlation and convolution. They can use the same operations to do analogical reasoning of the kind “What is to Sweden as Paris is to France?”. This is important because any intelligent machine needs to do such reasoning; but it’s something computers are notoriously bad at. If we could characterise HRRs categorically, it might help us discover which properties of their representational substrate are essential, and which merely accidental.
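The operations mentioned above can be made concrete in a few lines. The sketch below follows Tony Plate’s HRR scheme as I understand it: symbols are random high-dimensional vectors, binding is circular convolution, and approximate unbinding is circular correlation. All the symbol names, the dimensionality, and the “capital of” example are my own illustrative choices, not anything from the cited sources.

```python
import numpy as np

# A minimal sketch of Holographic Reduced Representations (HRRs):
# random vectors as symbols, circular convolution as binding,
# circular correlation as approximate unbinding.

rng = np.random.default_rng(0)
N = 1024  # dimensionality; retrieval gets cleaner as N grows

def symbol():
    """A random vector standing for a symbol (unit norm in expectation)."""
    return rng.normal(0.0, 1.0 / np.sqrt(N), N)

def bind(a, b):
    """Circular convolution, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(a, b):
    """Circular correlation: the approximate inverse of bind."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

paris, france, stockholm, sweden = symbol(), symbol(), symbol(), symbol()

# Superpose two country-capital bindings into one memory trace.
trace = bind(france, paris) + bind(sweden, stockholm)

# "What is to Sweden as Paris is to France?"
# Unbinding the trace with sweden yields a noisy copy of stockholm;
# a clean-up memory then snaps it to the nearest stored symbol.
noisy = unbind(sweden, trace)
best = max([("paris", paris), ("france", france), ("stockholm", stockholm)],
           key=lambda kv: cosine(noisy, kv[1]))[0]
```

Note how the retrieved vector is only approximately `stockholm`: the other bound pair contributes noise, which is why HRR systems pair the algebra with a clean-up memory. It is exactly this kind of detail that a categorical characterisation might classify as essential or accidental.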

Perhaps — I don’t know whether I’m joking or not — categorical characterisations could help us understand other phenomena. Could we find a universal property that a computational system satisfies if and only if it is conscious?!

Or — a simpler question, and one that must be sensible — that it satisfies if and only if it can represent and reason about itself? The latter would be an interesting project in dynamical systems; but perhaps it’s assuming too much to regard the reasoner as a dynamical system.

Category theory also gives us tools for unifying disparate mathematical and computational phenomena. AI and cognitive science, of course, are full of disparate mathematical and computational phenomena, usually ones whose relatedness is badly understood. That is why my n-Category Café posting above mentioned neural networks: in it, I cited Michael Healy’s paper Category Theory Applied to Neural Modeling and Graphical Representations. He uses colimits, functors, and natural transformations to map concepts expressed as logic onto concepts represented in one kind of neural network.

As another example, I cited a paper by Goguen on analogical reasoning via “conceptual blending”: Mathematical Models of Cognitive Space and Time. He uses institutions and 3/2-colimits to (for example) represent the meaning of the word “houseboat” as an optimal blend of the meanings of “house” and “boat”. Could we apply the same constructions to HRRs? That would unify two kinds of analogical reasoning implemented on very different representational substrates.
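A simplified picture of the blending construction may help here. The diagram below uses an ordinary pushout rather than Goguen’s 3/2-colimits, so it loses the subtleties that motivate his construction; the “generic space” terminology is from the conceptual-blending literature.

```latex
% Conceptual blending, simplified to an ordinary pushout.
% G is a "generic space" of structure shared by both inputs
% (something like: an object, a medium, a resident); the blend
% is the pushout object.
%
%        G ----------> House
%        |               |
%        v               v
%      Boat --------> Houseboat
%
\[
  \mathrm{Houseboat} \;=\; \mathrm{House} +_{G} \mathrm{Boat}
\]
% Goguen's 3/2-colimits weaken the universal property so that an
% "optimal" blend may drop conflicting features of the inputs,
% rather than being forced to merge everything.
```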

And as yet another, I suggest at generalisation as an adjunction that generalisation can be represented as an adjunction: more precisely, that generalisation and instantiation can be represented as an adjoint pair. Perhaps this could unify many different topics in machine learning.
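One standard way to phrase such an adjunction, in poset form, is as a (monotone) Galois connection; the particular framing below, including the names, is my illustration rather than necessarily the one in the linked page.

```latex
% Generalisation as a left adjoint, in poset form.
% Hyp:  hypotheses ordered by entailment (H <= H' when H is more specific);
% Data: sets of examples ordered by inclusion.
%
%   G : Data -> Hyp   sends examples E to their least general generalisation
%   I : Hyp  -> Data  sends a hypothesis H to the set of examples it covers
%
\[
  \mathrm{Hom}_{\mathbf{Hyp}}(G(E), H) \;\cong\; \mathrm{Hom}_{\mathbf{Data}}(E, I(H)),
  \qquad\text{i.e.}\qquad
  G(E) \le H \iff E \subseteq I(H).
\]
% Reading: G(E) is the *least* hypothesis covering E, which is what makes
% it a left adjoint; instantiation I is the corresponding right adjoint.
```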

The following quote from Greg Egan’s novel Incandescence captures how I want category theory to unify cognitive science and AI:

‘Interesting Truths’ referred to a kind of theorem which captured subtle unifying insights between broad classes of mathematical structures. In between strict isomorphism — where the same structure recurred exactly in different guises — and the loosest of poetic analogies, Interesting Truths gathered together a panoply of apparently disparate systems by showing them all to be reflections of each other, albeit in a suitably warped mirror. Knowing, for example, that multiplying two positive integers was really the same as adding their logarithms revealed an exact correspondence between two algebraic systems that was useful, but not very deep. Seeing how a more sophisticated version of the same could be established for a vast array of more complex systems — from rotations in space to the symmetries of subatomic particles — unified great tracts of physics and mathematics, without collapsing them all into mere copies of a single example.

Incidentally, category theory also inspired me to invent a way of building spreadsheets from modules, and of storing these modules on the Web for anyone to download into their own spreadsheets.


category: people

Revised on January 11, 2011 21:04:41
by Jocelyn Ireson-Paine