I have neglected this blog for a week because one of my computers died and I needed time to replace it. I write this blog on a big Windows-based desktop. But since that machine is not online, I transfer the data, via thumb drive, to a Mac laptop and squirt it out over the web on the Mac. This may seem like a silly arrangement, but I enjoy composing with a big screen, a real mouse, and a full-stroke keyboard. Yet I also need to own a portable that can be online wherever I take it. But to have both computers online would cost more. At least, it would have cost more until U.S. Cellular recently gave me a freebie Wi-Fi hotspot unit to replace my original cell-tower modem. (Cell towers are the only access to the net available in the rural area where I live.)
This last computer lasted only a couple of years, and it frequently crashed or froze up. In fact, my last five computers were that way--short-lived and unreliable. But there was a time when computers did not crash or ever need to be replaced. I bought my first computer in 1982, and I have had at least one computer in the house ever since. About a year ago I hauled five computers to the recycling center. They were the first five computers I ever owned, and all five were still in perfect working condition. They were replaced only because they had become obsolete, not because there was anything wrong with them. The only reason I still had them around is that I can't bear to throw away anything that still works. Not only did these machines still work, but in their entire life of service, not one of them ever froze up or crashed.
But somewhere in the late 1990s I began to notice that reliability was becoming a problem. I had one machine that froze up about every half hour even when it was new. So why did we get utterly reliable computers in the first generation of cheap home computers, and never get them again since? I think I have the answer.
In the early 1960s, when I was in the Signal Corps, our satellite communications ground station was equipped with a computer. It was the size of a large refrigerator laid on its side, it had 4,000 bits of RAM (ferrite rings), it had a clock speed of 4 kc (kilocycles), and it cost $40,000, which would be about $400,000 in today's dollars. In the early 60s, there was no such thing as a "home computer." What made the cheap personal computer possible was the development of the "chip." Somewhere in the mid-60s, someone announced, with great fanfare, that they had succeeded in placing two transistor junctions on the same silicon wafer. And within a year they had increased this number to 4, then 8, then 64, and so on. And they gleefully speculated that once this technology was perfected, there might be no practical limit to the number of junctions that could fit on one chip. Well, actually there is a limit, and we'll get to that later, but it's a pretty big limit. I just bought a thumb drive that has 8 gigabits of memory on a single chip, and the salesman asked me if I didn't want a bigger one.
In 1982, when I bought my first computer, I believe it had 4 K of RAM, supplied by 4 chips. But about every 18 months since then, the number of junctions per chip has doubled, and the speed has doubled as well, while the price, in real dollars, has continually dropped. There is a price to pay, though. Every time you put more junctions on a chip, you reduce the surface area of each individual junction, and that of course reduces both the time required to switch that junction and the power required to do so. So as computers became smaller and cheaper, they also became faster and more energy efficient.
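To put rough numbers on that doubling, here is a small Python sketch of the arithmetic. It is only my own illustration of the 18-month rule of thumb, starting from the 4 K figure above; the sample years are arbitrary illustration points, not claims about any particular machine.

# Back-of-the-envelope sketch of the doubling described above:
# start from about 4 K in 1982 and double every 18 months.
# The sample years are arbitrary, not real product specs.

def capacity_after(start_bytes, start_year, end_year, months_per_doubling=18):
    """Return capacity after doubling every `months_per_doubling` months."""
    doublings = (end_year - start_year) * 12 / months_per_doubling
    return start_bytes * 2 ** doublings

start = 4 * 1024  # roughly 4 K of RAM in 1982, as mentioned above
for year in (1982, 1991, 2000, 2009):
    size = capacity_after(start, 1982, year)
    if size < 1024 ** 2:
        print(f"{year}: about {size / 1024:,.0f} K")
    else:
        print(f"{year}: about {size / 1024 ** 2:,.0f} MB equivalent")

By that crude rule of thumb, the 4 K of 1982 grows to roughly a gigabyte by the late 2000s, which gives you a feel for why the salesman shrugged at my little thumb drive.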
But eventually we will reach the point where the energy required to switch each junction is less than the spikes of energy from quantum effects within the silicon crystal itself. At that point, the signal is lost in the noise, and no reliable operations are possible. But even before you reach the quantum threshold, you get signal-to-noise problems from the normal "thermal noise" of all current-carrying parts. Whenever current flows through a resistor (or anything that has resistance), a certain amount of radio noise is generated. I'm not sure whether this noise is actually "generated," or whether there is merely a random fluctuation of conductivity occurring at radio frequencies. But in any case, this noise is proportional to the temperature of the conductor. Temperature is simply a measure of the average kinetic energy of the particles in a material, and since this radio noise arises from the thermal agitation of the charge carriers within the material, the higher the temperature, the higher the noise. When I worked at the satellite ground station, the front end of our RF receiver was a parametric amplifier that was cooled to about a hundred degrees or so above absolute zero as a way of reducing this noise level. But cooling our personal computers to near absolute zero will not be practical.
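For anyone who wants to see that temperature dependence in numbers, the standard Johnson-Nyquist formula gives the noise voltage as the square root of 4 times Boltzmann's constant times temperature times resistance times bandwidth. Here is a minimal Python sketch of it; the 50-ohm resistance and 1 GHz bandwidth are made-up illustration values, not the specs of our receiver.

from math import sqrt

K_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def johnson_noise_vrms(temp_kelvin, resistance_ohms, bandwidth_hz):
    """RMS thermal-noise voltage across a resistance (Johnson-Nyquist)."""
    return sqrt(4 * K_B * temp_kelvin * resistance_ohms * bandwidth_hz)

R = 50.0   # ohms -- hypothetical value for illustration
BW = 1e9   # 1 GHz bandwidth -- hypothetical value for illustration
for label, temp in (("room temperature", 290.0), ("cooled front end", 100.0)):
    v = johnson_noise_vrms(temp, R, BW)
    print(f"{label} ({temp:.0f} K): about {v * 1e6:.0f} microvolts rms")

The noise power drops in direct proportion to the temperature (the voltage with its square root), which is exactly why chilling the front end of that receiver paid off.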
Mind you, the energies involved in this "noise temperature" are exceedingly minute. But there are also power-line spikes and stray ground currents, even if you own an elaborately filtered power supply. And there are also minute electrical disturbances caused by cosmic radiation. I repeat: all of these disturbances are exceedingly minute. But as we have made our computer circuits ridiculously tiny, ridiculously tiny disturbances anywhere in the circuit are all that's required to disrupt them. And that, my friends, is why your modern desktop will freeze up--and your old TRS-80 didn't.