Ideas from Joe Betts-LaCroix, meant to evoke ideas from you


AIs plotted in Hardware - Software space

Was thinking a few years ago about the prospects of someone developing a strong AI in the near future, and it occurred to me that if they got close by writing sophisticated software, then they could quickly get the rest of the way by throwing money at it: buying more hardware.

This presumed that hardware and software capabilities could trade off against each other. To test this idea, I reached out for a couple of examples, and found one in chess, and another from some friends in factoring algorithms.

I plotted various kinds of tasks in Hardware-Software space, and proposed that the speed of those tasks remained constant along diagonal iso-lines, and was pleased to see how well this bore out. Note that I conveniently leave the units off my horizontal axis, so you'll have to pretend there is some way of quantifying this; for the moment, humor me by accepting that it increases monotonically to the right.
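One way to make the iso-line idea concrete (this is my own toy assumption, not anything from the plot itself): if a task's speed is the product of hardware power and software sophistication, then in log-log Hardware-Software space the lines of constant speed come out as straight diagonals.

```python
import math

def task_speed(hardware, software):
    # Toy model: capability is the product of hardware power and
    # software sophistication, in arbitrary monotonic units.
    return hardware * software

# Weak hardware + strong software lands on the same iso-line as
# strong hardware + weak software:
assert task_speed(1, 1000) == task_speed(1000, 1)

# In log units the iso-line log(hw) + log(sw) = const is a straight
# diagonal, which is why the plotted lines look the way they do:
assert math.isclose(math.log(1) + math.log(1000),
                    math.log(1000) + math.log(1))
```

Any other model where the two axes trade off monotonically would give qualitatively similar diagonals; the product form is just the simplest.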

For the factoring task, a 1977 Apple II running a sophisticated 2007-era algorithm had roughly equivalent performance to a 1977 algorithm running on two of IBM & Livermore's ground-vibrating behemoths known as BlueGene/Ls, the fastest computer in the world in 2007.
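To get a feel for how a better algorithm substitutes for raw hardware, here is a deliberately small stand-in (my choice of algorithms, not the ones in the actual comparison): naive trial division takes on the order of sqrt(n) steps, while Pollard's rho, a smarter method, needs only about n^(1/4) expected steps — the same kind of gap, in miniature, that lets old hardware with new software keep pace with new hardware running old software.

```python
import math

def trial_division(n):
    # Naive factoring: O(sqrt(n)) divisions -- the "throw hardware
    # at it" end of the trade-off.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

def pollard_rho(n):
    # Pollard's rho: ~O(n^(1/4)) expected steps -- the "smarter
    # software" end. Falls back to trial division in the rare case
    # the cycle-finding walk fails.
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + 1) % n          # tortoise: one step
        y = (y * y + 1) % n          # hare: two steps
        y = (y * y + 1) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else trial_division(n)

n = 10403  # = 101 * 103
assert trial_division(n) == 101
f = pollard_rho(n)
assert f in (101, 103)
```

Both routines find a factor; the difference that matters here is how the step count grows as n does.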

It's a similar story with two chess-playing computers, Deep Blue and Deep Fritz, each of which performed at about the level of a human grandmaster.

What I conclude for the case of the strong AI project is that the developers will face a choice once they achieve an AGI of sufficient power, say, powerful enough to reach the "Turing Test" iso-line. I assume they will do so using some kind of large cluster running some exquisitely brilliant software.

They could then go the direction I call "democratize", and refine the software so that people all over the world could instantiate these AGIs and start solving all kinds of hard problems, advance AI research, keep people company, etc.

Or...they could take an alternate path that I call "Push to Seed", on which they take in massive funding and expand their hardware until the intelligence reaches the Seed AI level. This is the level at which the AI would be capable of redesigning itself for increased performance, and that redesigned version could then design for even more performance, and so on. Whoever possesses this AI will have the world's most powerful tool -- will they use it for good or ill? That probably depends on who provides that massive funding.

Food for thought...


What is the shape of your Object-Relationship curve?

I recently moved, and so have been thinking about objects. Which to keep, which to toss, which to buy for the new place, etc. I would ask my friends, but I know that I'd get conflicting answers. Some of my friends are concerned with function über alles, while others are obsessed primarily with how things look. This got me to thinking about form vs. function -- and how do I feel about them?

I plotted all possible objects in form-function space and gave them names:

Of course! All I have to do is get only elegant things, and I'll be set. Unfortunately, some are not available and others are too expensive. I am going to have to learn to deal with some trade-offs. How are these properties related? I am sure that more of one can make up for less of the other, so here I plot a simplistic way of showing the merit of a given object across form-function space, by drawing iso-lines of what I'll call an object's Personal Value for now:
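The simplest version of those iso-lines (a sketch under my own assumption of a linear model; the weights and numbers are made up for illustration) is a weighted sum of form and function, where the weights set the slope of the lines — which is exactly what will differ from person to person:

```python
def personal_value(form, function, w_form=0.5, w_function=0.5):
    # Toy linear model: iso-lines of constant Personal Value are
    # straight diagonals in form-function space; the weights set
    # their slope.
    return w_form * form + w_function * function

# For a balanced person, a plain-but-useful object and a
# beautiful-but-useless one can sit on the same iso-line...
assert personal_value(2, 8) == personal_value(8, 2)

# ...but not for a function-über-alles friend (w_function = 0.9):
assert personal_value(2, 8, 0.1, 0.9) > personal_value(8, 2, 0.1, 0.9)
```

Swapping the weights rotates the iso-lines, which is the next point.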

Assuming those iso-lines describe how I deal with objects, I wonder how I could represent my friends and family members? I think they would be slanted at different angles, like this:

It's very helpful to have a way to categorize my friends; I was wondering what to call all those categories that Facebook allows me to create.

I'll bet some people have more distance between the pairs of lines. I'll call that distance a person's Object Hysteresis. Some people will hang onto that backpack for years even though it's not optimal for them anymore, while others will toss it as soon as they discover that and head straight to REI. Still others (minimalists) will toss it, but wait until they find a replacement with much, much higher Personal Value, which could take years.
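The keep-or-toss behavior above can be sketched as a threshold rule (again a toy model of my own; the names and numbers are hypothetical): an object gets replaced only when a candidate beats it by more than the owner's hysteresis gap, so a wide gap means hanging on for years and a narrow gap means a quick trip to REI.

```python
def decide(current_value, replacement_value, hysteresis):
    # Keep the object unless a replacement beats it by more than
    # the owner's Object Hysteresis gap (toy model).
    if replacement_value - current_value > hysteresis:
        return "replace"
    return "keep"

# A packrat (wide gap) keeps the worn-out backpack...
assert decide(3, 6, hysteresis=5) == "keep"

# ...while a gear-head (narrow gap) heads straight to REI:
assert decide(3, 6, hysteresis=1) == "replace"
```

The minimalist case is the same rule with a very large gap: they toss eventually, but only for a replacement that clears a high bar.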

What do your Object-Relationship curves look like in Form-Function space?