Science [Discussion] How long until a 1 THz processor?

Your prediction

  • Next year

    Votes: 1 5.6%
  • 10 years (2025)

    Votes: 7 38.9%
  • Between 2025-2065

    Votes: 6 33.3%
  • Next Century (2100-2199)

    Votes: 0 0.0%
  • 23rd Century

    Votes: 0 0.0%
  • Millennia

    Votes: 0 0.0%
  • 10,000 years

    Votes: 0 0.0%
  • We will never make a processor that powerful

    Votes: 4 22.2%

  • Total voters
    18

SpaceEagle

GEP Reborn Project Manager
Joined
Feb 22, 2013
Messages
142
Reaction score
0
Points
16
Location
Chicken Coop
How many years, decades, or centuries do you think it will take before humanity develops the first 1 THz computer processor? Discuss. Please do not include me in the discussion. Please be serious.

My current prediction is sometime between ~2025-2065, TBH.



:cheers:, SpaceEagle
 

Pipcard

mikusingularity
Addon Developer
Donator
Joined
Nov 7, 2009
Messages
3,709
Reaction score
39
Points
88
Location
Negishima Space Center
How about the first time that a 1 THz processor can be reasonably obtained by PC gamers?
 

Artlav

Aperiodic traveller
Addon Developer
Beta Tester
Joined
Jan 7, 2008
Messages
5,790
Reaction score
780
Points
203
Location
Earth
Website
orbides.org
Preferred Pronouns
she/her
Not going to happen.
At such frequencies many concepts in circuitry cease to work in the usual way.
Everything has inductance and radiates, minute parasitic capacitances become shorts, dielectric heating sets in, switching losses pile up (they grow with frequency and with the square of the supply voltage), and so on.

Simply speaking, the rules start to break down, just like Newtonian physics breaks down once you go fast enough or get massive enough.
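To put a rough number on the switching-loss term alone, here is a back-of-the-envelope sketch using the standard dynamic-power relation P = alpha * C * V^2 * f; the activity factor, capacitance, and supply voltage are made-up illustrative values, not figures for any real chip:

Code:
# Dynamic (switching) power: P = alpha * C * V^2 * f.
# All parameter values are illustrative assumptions, not real chip figures.
def dynamic_power_watts(alpha, c_switched_farads, v_dd_volts, freq_hz):
    """Power spent charging and discharging node capacitance every cycle."""
    return alpha * c_switched_farads * v_dd_volts ** 2 * freq_hz

ALPHA = 0.1        # assumed fraction of nodes switching per cycle
C_SWITCHED = 1e-9  # assumed total switched capacitance: 1 nF
V_DD = 1.0         # assumed supply voltage: 1 V
for f in (3e9, 100e9, 1e12):  # 3 GHz, 100 GHz, 1 THz
    print(f"{f / 1e9:6.0f} GHz -> {dynamic_power_watts(ALPHA, C_SWITCHED, V_DD, f):7.1f} W")
# 0.3 W at 3 GHz, 10 W at 100 GHz, 100 W at 1 THz -- same toy chip, same voltage.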

I don't think processors will get much faster than now in terms of consequential instructions per second (unless someone invents a completely new switching element).
But there is an unlimited field of growth for parallel processing.

How about.... it's already happened:
Nope.
There are a dozen orders of magnitude between an amplifier chip and a CPU.
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,628
Reaction score
2,345
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
I think we should also forget the damn clock rate. Technically speaking, the clock rate is no measure of a CPU's speed at all; it only defines how fast data is transferred from A to B. That we still talk so much about it is just a bit of laziness: building CPUs without a common clock rate is possible, but not at all easy.

As a logical consequence of the limitations Artlav stated, the only way to make CPUs faster is to make their performance more and more independent of the clock rate, up to the point where there is no clock rate at all. Today, that's mostly done with more pipeline stages, special pipeline branches, and multiple cores.
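A minimal sketch of the arithmetic behind that (the IPC figures and core count are purely illustrative assumptions): throughput is roughly instructions-per-cycle times clock times cores, so IPC and core count are knobs that raise performance without touching the clock at all.

Code:
# Throughput ~= IPC (instructions per cycle) * clock * cores.
# IPC values and core counts are illustrative assumptions only.
def mips(ipc, freq_hz, cores=1):
    """Millions of instructions per second, ignoring memory stalls and scaling losses."""
    return ipc * freq_hz * cores / 1e6

print(mips(ipc=1.0, freq_hz=3e9))           # narrow in-order core:    3,000 MIPS
print(mips(ipc=4.0, freq_hz=3e9))           # wide superscalar core:  12,000 MIPS
print(mips(ipc=4.0, freq_hz=3e9, cores=8))  # eight such cores:       96,000 MIPS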

In the future, we might see something like general-purpose GPUs becoming CPUs, and our classic concept of the von Neumann architecture would finally become obsolete.
 

Ravenous

Donator
Donator
Joined
Jul 19, 2013
Messages
275
Reaction score
0
Points
0
Location
sitting at the pointy end
Another thing about the clock rate... I wonder if we could just build a simple analogue computer (hard add/subtract/multiply stages chained together) and claim that it is a very fast processor instead? I suppose it depends on the bandwidth of the amplifiers. I have no idea what the speed would be for a good modern op-amp...?

(And yes, I know it wouldn't be a software programmed computer any more.)
 

kamaz

Unicorn hunter
Addon Developer
Joined
Mar 31, 2012
Messages
2,298
Reaction score
4
Points
0
A 1 THz frequency corresponds to a 0.3 mm wavelength.

This means the CPU die has to be smaller than that (by at least one order of magnitude) to keep propagation delays under control. So the actual CPU can be at most about 30 µm × 30 µm (more like 3 µm × 3 µm).

If you can make your transistors 10 nm across (you wish...), then you have AT MOST 9 million transistors at this speed, and more likely some 90 thousand.
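For reference, the arithmetic behind those figures (a quick sketch using the one-tenth-of-a-wavelength rule of thumb above and the assumed 10 nm transistor pitch):

Code:
# Back-of-the-envelope die size and transistor budget at 1 THz,
# reproducing the rough numbers above.
C_LIGHT = 3.0e8              # speed of light, m/s
WAVELENGTH = C_LIGHT / 1e12  # 0.3 mm at 1 THz
PITCH = 10e-9                # assumed transistor pitch: 10 nm
for shrink in (10, 100):              # die kept to 1/10 or 1/100 of a wavelength
    die_side = WAVELENGTH / shrink    # 30 um, then 3 um
    transistors = (die_side / PITCH) ** 2
    print(f"die side {die_side * 1e6:5.1f} um -> ~{transistors:,.0f} transistors")
# die side  30.0 um -> ~9,000,000 transistors
# die side   3.0 um -> ~90,000 transistors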

To put that in perspective:

Pentium III ~ 10 million transistors (1999)
80486 ~ 1 million transistors (1989)
MC68k ~ 68k transistors (1979 (!))

So the situation is that you can make a part of your CPU work at 1THz, but it is going to be a small part. Or, you can make several small CPUs on the same die and do multiprocessing.

But it gets worse, because at these speeds a Si-based process does not really work, so you have to use InGaAs or SiGe, and this is expensive. At that point, you have an economic question: for the same money, is a single very fast InGaAs CPU better than a battery of cheap Si-based CPUs working at a lower frequency?

If you want to play with THz imaging (which is what was linked above), then you use an InGaAs process to make the THz oscillator, amplifier, and the mixer that downconverts the signal to some sensible frequency (say 100 MHz), and run the IF signal to a second Si die where you have the rest of the system.

---------- Post added at 05:26 PM ---------- Previous post was at 05:22 PM ----------

I have no idea what the speed would be for a good modern op-amp...?

You can get op-amps with a gain-bandwidth product above 1 GHz nowadays:

http://www.analog.com/en/products/a...dback-amplifiers/ad8003.html#product-overview
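For an ideal voltage-feedback op-amp, the gain-bandwidth product divides straight into the closed-loop gain. A minimal sketch, assuming a hypothetical 1 GHz GBW part (the round figure above, not a datasheet value for the linked device):

Code:
# Closed-loop bandwidth of an idealized voltage-feedback op-amp:
# bandwidth ~= gain-bandwidth product / closed-loop gain.
GBW_HZ = 1e9  # assumed 1 GHz gain-bandwidth product (illustrative only)
for gain in (1, 10, 100):
    print(f"gain {gain:3d}x -> ~{GBW_HZ / gain / 1e6:5.0f} MHz of bandwidth")
# gain   1x -> ~ 1000 MHz
# gain  10x -> ~  100 MHz
# gain 100x -> ~   10 MHz

So even a fast op-amp runs out of steam three orders of magnitude below 1 THz the moment you ask for useful gain.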
 

HarvesteR

Member
Joined
Apr 22, 2008
Messages
386
Reaction score
15
Points
18
Sounds technically possible, but not much good in practice then. You'd be better off with a slower clock and a smarter architecture and/or multiple cores to raise the effective frequency to an equivalent of what a single THz unit would do, without the complications of having wavelengths that are smaller than the circuits they run on.
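As a rough illustration of that "equivalent frequency" idea, here is a sketch with made-up numbers (4 GHz cores, 250 of them, and an Amdahl-style model); the point is only that the equivalence holds when the workload parallelizes well:

Code:
# Effective single-core-equivalent clock of a multi-core chip, Amdahl-style.
# Core clock, core count, and parallel fraction are illustrative assumptions.
def equivalent_clock_hz(core_hz, n_cores, parallel_fraction):
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)
    return core_hz * speedup

print(equivalent_clock_hz(4e9, 250, 1.00) / 1e12)  # 1.0 -> a full THz-equivalent, perfectly parallel
print(equivalent_clock_hz(4e9, 250, 0.95) / 1e9)   # ~74 -> only ~74 GHz-equivalent if 5% stays serial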

IMO, using clock speed to gauge a CPU's worthiness is about as outdated these days as using VRAM alone to measure the worthiness of a GPU. It worked back in the day when those were the bottleneck factors, I guess, but these days there are other areas to improve that will get you more oomph for the effort.

Cheers
 

N_Molson

Addon Developer
Addon Developer
Donator
Joined
Mar 5, 2010
Messages
9,290
Reaction score
3,258
Points
203
Location
Toulouse
I don't think processors will get much faster than now in terms of consequential instructions per second (unless someone invents a completely new switching element).
But there is an unlimited field of growth for parallel processing.

What about bio-components? Of course, having to feed the CPU might seem a bit... alien... but in several areas, neural cells (and I don't mean human ones) will beat any electromechanical device. Not always, of course, as any modern computer beats a human when dealing with maths... So maybe the best of both worlds?

Well, not for tomorrow anyways...
 

Artlav

Aperiodic traveller
Addon Developer
Beta Tester
Joined
Jan 7, 2008
Messages
5,790
Reaction score
780
Points
203
Location
Earth
Website
orbides.org
Preferred Pronouns
she/her
but in several areas, neural cells (and I don't mean human ones) will beat any electromechanical device.
Uh, neurons run at tens to hundreds of hertz, not giga-, mega-, or even kilo-.

The only reason the brain is much faster at certain tasks than modern computers is that it is obscenely parallel.

Tens of billions of neurons, working all their "instructions" at once, do the same job that a CPU would have to do instruction by instruction at a mere few billion instructions per second.

Give the brain a comet's trajectory among many bodies to calculate, and it will struggle for a while.
Give a CPU a picture to dissect into patterns, and it will struggle for a while.

It's a different class of hardware.
 

Soheil_Esy

Fazanavard فضانورد
Joined
Apr 5, 2015
Messages
744
Reaction score
19
Points
18
50th anniversary of Moore's Law

Sunday, April 19, will mark the 50th anniversary of a phenomenon that has transformed industry and society more than any other in modern times: Moore's Law.

On that date in 1965, Gordon Moore, cofounder of Fairchild Semiconductor and later of Intel, published an article in Electronics magazine observing that the number of transistors on silicon chips had doubled each year for the previous decade. He also predicted that the trend would continue for another decade.

The simplified version of this law states that processor speeds, or overall processing power for computers, will double every two years. A quick check among technicians in different computer companies shows that the term is not very popular, but the rule is still accepted.

To break the law down even further, it specifically stated that the number of transistors on an affordable CPU would double every two years (which is essentially the same thing stated before), but "more transistors" is the more accurate formulation.

If you were to look at processor speeds from the 1970s to 2009, and then again in 2010, you might think that the law has reached its limit or is nearing it. In the 1970s, processor speeds ranged from 740 kHz to 8 MHz; notice that the 740 is kHz (kilohertz), while the 8 is MHz (megahertz).

From 2000 to 2009 there has not really been much of a speed difference, as the speeds range from 1.3 GHz to 2.8 GHz, which suggests that speeds have barely doubled within a 10-year span. This is because we are looking at the speeds and not the number of transistors: in 2000 the number of transistors in a CPU was 37.5 million, while in 2009 the number went up to an outstanding 904 million. This is why it is more accurate to apply the law to transistors than to speed.
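As a sanity check, those two transistor counts are consistent with a doubling roughly every two years (a quick calculation using only the figures quoted above):

Code:
import math
# Doubling time implied by the transistor counts quoted above (2000 vs 2009).
year0, count0 = 2000, 37.5e6
year1, count1 = 2009, 904e6
doublings = math.log2(count1 / count0)  # ~4.6 doublings
print(f"{doublings:.1f} doublings in {year1 - year0} years -> "
      f"one doubling every {(year1 - year0) / doublings:.1f} years")
# 4.6 doublings in 9 years -> one doubling every 2.0 years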

Gordon Moore, Mar 2015:

Some things will change. We won’t have the rate of progress that we’ve had over the last few decades. I think that’s inevitable with any technology; it eventually saturates out. I guess I see Moore’s Law dying here in the next decade or so, but that’s not surprising.

http://spectrum.ieee.org/computing/hardware/gordon-moore-the-man-whose-name-means-progress

http://www.mooreslaw.org/
http://www.forbes.com/sites/michaelkanellos/2015/04/13/moores-law-turns-50-seven-things-to-remember/
 

RGClark

Mathematician
Joined
Jan 27, 2010
Messages
1,635
Reaction score
1
Points
36
Location
Philadelphia
Website
exoscientist.blogspot.com
I was interested to note that the advance in processor speeds seems to be slowing, diverging from what might be expected from Moore's law. However, a new technology, the memristor, may ramp up speeds as well as storage density again:

Machine Dreams.
To rescue its struggling business, Hewlett-Packard is making a long-shot bid to change the fundamentals of how computers work.
By Tom Simonite on April 21, 2015
http://www.technologyreview.com/featuredstory/536786/machine-dreams/


Bob Clark
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,882
Reaction score
2,133
Points
203
Location
between the planets
in several areas, neural cells (and I don't mean human ones) will beat any electromechanical device. Not always, of course, as any modern computer beats a human when dealing with maths...

This is the entire problem with comparing bio-hardware and electronic hardware, right there: it's a completely different architecture for completely different purposes. We built computers to do tasks that our brains aren't good at. In exchange, computers suck at the things our brains are good at.
A brain is not a calculator, nor is it software in the common sense; it is self-modifying hardware that works mostly by associative connections. It can compute in the sense a computer does, but doing so efficiently would require a completely different wiring of neurons than the tasks we actually use it for. As such, clock rate is pretty much irrelevant to the brain: its speed does not stem from doing fast computations, it stems from being able to avoid most computations by virtue of having a widely connected, associative database with mapped inputs and outputs and the query structures built directly into the hardware.

Case in point: people really good at crunching numbers in their heads do so by heavy use of "lookup tables", the content and access of which they had to train tediously to establish the necessary logical connections.
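A toy contrast of the two strategies in code (purely illustrative; it models neither neurons nor CPU internals): the memorised route pays once to build the table, then answers every query with a single lookup.

Code:
# Toy contrast: deriving "a * b" step by step vs. recalling it from a lookup table.
# Purely illustrative; not a model of neurons or of CPU arithmetic units.
def multiply_by_repeated_addition(a, b):
    """The 'work it out from scratch' route: no stored facts, many small steps."""
    total = 0
    for _ in range(b):
        total += a
    return total

# The 'memorised' route: build the table once, then each answer is one lookup.
TIMES_TABLE = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}
print(multiply_by_repeated_addition(7, 8))  # 56, derived step by step
print(TIMES_TABLE[(7, 8)])                  # 56, recalled in a single step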
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,628
Reaction score
2,345
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
It can compute in the sense a computer does, but doing so efficiently would require a completely different wiring of neurons than the tasks we actually use it for. As such, clock rate is pretty much irrelevant to the brain: its speed does not stem from doing fast computations, it stems from being able to avoid most computations by virtue of having a widely connected, associative database with mapped inputs and outputs and the query structures built directly into the hardware.

Case in point: people really good at crunching numbers in their heads do so by heavy use of "lookup tables", the content and access of which they had to train tediously to establish the necessary logical connections.

I would not use the word compute for the human brain (or the brain of any other animal). The brain estimates. That is what it is really good at, and what the "hardware" was designed for.

The brain of a dog does not compute the shortest path to fetch a ball you have thrown into the water. It estimates it, based on previous experience. (And with results that are extremely close to numerical calculations.)

The advantage of estimating over computing: computing fails if the decision has to be made without known input factors or known rules. A brain can get usefully good results even when it has to work with unknown factors and unknown rules, by estimating multiple possible scenarios at once and selecting the one that best achieves its owner's goals. Timid? Your brain will prefer the estimates that keep you safe. Ambitious? Your brain will prefer the most radical path to get what you want.

Of course, a brain can also estimate wrongly. But it's much harder to fool an intelligent brain than a smart computer.
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,882
Reaction score
2,133
Points
203
Location
between the planets
I would not use the word compute for the human brain (or the brain of any other animal).

That was actually what I was trying to say. A brain can compute. As soon as it is able to count, it can solve basic arithmetic without further input; it just sucks big time at it. Even simple multiplication works orders of magnitude faster by memorizing a lookup table, while for a computer, basic operations work much faster when computed from scratch every time.
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,628
Reaction score
2,345
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
A brain can compute.

Well, that's what I mean by estimate.

You are not really computing it, you are just using multiple estimation processes to get an approximation that may sometimes be the exact calculation result.

Like doing an addition in your head... it's not like you have a special addition unit in your brain. You just iteratively approximate the sum, estimating step by step using "look-up tables" in your brain.
 

Thunder Chicken

Fine Threads since 2008
Donator
Joined
Mar 22, 2008
Messages
4,379
Reaction score
3,309
Points
138
Location
Massachusetts
Another thing about the clock rate... I wonder if we could just build a simple analogue computer (hard add/subtract/multiply stages chained together) and claim that it is a very fast processor instead? I suppose it depends on the bandwidth of the amplifiers. I have no idea what the speed would be for a good modern op-amp...?

(And yes, I know it wouldn't be a software programmed computer any more.)

I want mechanical!

 