Stranger than fiction?

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
If it's a Truman Show, then that approach makes sense. If it's an attempt at socio-historical modeling, then you pretty much have to let the system interact with itself. If you start abstracting that, you're already limiting the result.

The whole idea of the experimental verification was to look for abstraction artifacts, so if it's not abstracted on some level, that's kind of pointless.

Now we're left to talk about what the right abstraction level for a social simulation is - which I don't know - but I dare say the subatomic level is too deep.

Also, note that you're (simulated to be) smarter than the people talking about the certainty of an afterlife because being (again and again) in a simulation is supposedly much more likely than being in the real thing - obviously there are different kinds of simulations possible...
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,882
Reaction score
2,133
Points
203
Location
between the planets
but I dare say the subatomic level is too deep.

Oh sure, but you would have to simulate the people.

note that you're (simulated to be) smarter than the people talking about certainty of afterlife

Well, pity, because I actually belong to the second group :lol:
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
Oh sure, but you would have to simulate the people.

Yes - but do you have to make them more complex than chatbots? History seems to show that groups of people are far more predictable than individuals, for instance.
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,882
Reaction score
2,133
Points
203
Location
between the planets
Yes - but do you have to make them more complex than chatbots?

Probably somewhat more if you want representative results. We have been coding up generalised models for the formation of solar systems since the late sixties, and they turned out mostly wrong because they ignored microinteractions. We probably won't get anything really accurate until we get down to the particle level.

But even if you wrote a generalised model, what would be the point of putting one fully realised human into the mix?
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
There's a good way to test whether your simulation is good enough - you just keep adding detail and re-running it until the output doesn't change.

It'd be wasteful to use quantum chromodynamics to compute wave development in the oceans - and we know that because plain hydrodynamics can already do it.
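
Roughly this kind of loop, in a toy Python sketch (simulate() here just sums up a sine as a stand-in for whatever observable your actual model produces - the numbers are made up):

Code:
# Toy version of "keep adding detail until the output stops changing".
# simulate() is just a placeholder for a model evaluated at a given resolution.
import numpy as np

def simulate(n_cells):
    x, dx = np.linspace(0.0, np.pi, n_cells, retstep=True)
    return np.sum(np.sin(x)) * dx           # stand-in observable (exact value is 2.0)

def refine_until_converged(tol=1e-6, n=16, max_doublings=20):
    prev = simulate(n)
    for _ in range(max_doublings):
        n *= 2                               # add detail
        cur = simulate(n)
        if abs(cur - prev) < tol:            # output no longer changes -> good enough
            return n, cur
        prev = cur
    raise RuntimeError("did not converge")

print(refine_until_converged())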

But even if you wrote a generalised model, what would be the point of putting one fully realised human into the mix?

Excellent observation - a socio-historical model would not have any obvious need to do that. So we may venture the guess that if what we see is a simulation, it's not that kind.
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,882
Reaction score
2,133
Points
203
Location
between the planets
a socio-historical model would not have any obvious need to do that.

On the flip side, we have the Truman Show scenario, where the entire simulation exists for the benefit of realising one or a few minds fully and making them the protagonists in some overblown entertainment venture.
Here the problem crops up that, since I have to assume that I am that fully realised mind, it doesn't seem to be going too well. I'd expect the show to get cancelled any minute now, because really, my life is pretty pedestrian overall :lol:

So either we're all in a fully realised simulation, for which there apparently isn't much need; or we're all in reality; or there is some other purpose, which I can't currently think of, for which a generalised simulation with one fully realised mind is necessary.
Along comes Occam's razor...
 

Andy44

owner: Oil Creek Astronautix
Addon Developer
Joined
Nov 22, 2007
Messages
7,620
Reaction score
7
Points
113
Location
In the Mid-Atlantic states
This is a great read on the subject. Apparently some "look-how-visionary-and-forward-thinking-I-am" Silicon Valley types want to find a way to destroy the simulated world we all live in and break us out of it. Elon Musk is suspected to be one of them (surprise).

I guess when rich people decide to change the world, starting by removing optical drives from our computers and forcing us to use their cloud, and topping it off by destroying the universe, the rest of us had just better get on the bandwagon or get left behind! ;)

Tech Billionaires Want to Destroy the Universe

http://www.theatlantic.com/technolo...-a-false-notion-of-reality/503963/?yptr=yahoo
 

birdmanmike

Active member
Joined
Jan 20, 2016
Messages
104
Reaction score
0
Points
31
Location
High Peak
But if it's a simulated world, we are likely either simulations as well, or some sort of fleshy creatures or constructs inhabiting it (and what about all the other "living" things?).

Ergo, if you destroy the simulated world and we are sims, we go with it - or we "break out" and somehow enter a "real" world/universe which might not be something we can live in. Then again, the simulated world might be out in "space" and we'd all die anyway - try breathing vacuum . . .

(Sorry, off topic, but I've not forgiven the people who took away optical drives - try using the cloud when you have a sub-1.8 Mbps download speed and 0.6 up! OK on my home-built super-computer, but the first thing I did with my Mac was buy an external optical drive.) (Home-built for my Orbiter and flight-sim simulated worlds :hmm: )

disappears up own fundament . . .
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,624
Reaction score
2,343
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
I remember arguing just the opposite - that parallelizing tasks works only for rather particular problems and is usually worse than having n times the computation speed serially.

Let me put it that way: if you have a task that needs to be done really quickly, a parallel solution will almost always exist and will always scale well. (Unless you would have to decide on something at nearly the speed of light - but then you need no CPUs, you need specialized circuits anyway.)

You would have to search for a very long time to find such special problems.

Most contemporary problems are parallel, though. Even in avionics.
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
Let me put it that way: if you have a task that needs to be done really quickly, a parallel solution will almost always exist and will always scale well.

What is that statement based on?

* Solving equations of motion is usually a serial computation, as the next state update requires the last state to be finished.

* Raytracing is a serial problem, as the object being reflected needs to be ready by the time the ray hits the mirror.

(Didn't have to search a lot to come up with these...) A simulation of any environment usually needs both of these, so...

There are problems (like Monte Carlo integration) that really do parallelize well, but once you need states synchronized because things aren't independent, parallel solutions become a mess quickly.
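
To make the contrast concrete, here's a rough Python toy (made-up numbers, not a benchmark): the oscillator has to be integrated step by step, while the Monte-Carlo chunks are independent and can just be thrown at a process pool.

Code:
# Rough contrast, not a benchmark: a serial equation-of-motion update versus
# a Monte-Carlo estimate whose samples are independent of each other.
import random
from multiprocessing import Pool

def integrate_motion(x0, v0, dt, steps):
    # Explicit Euler for a unit harmonic oscillator: inherently serial,
    # because step i+1 needs the state produced by step i.
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + v * dt, v - x * dt
    return x, v

def mc_chunk(args):
    # One independent chunk of a Monte-Carlo estimate of pi.
    n, seed = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    print(integrate_motion(1.0, 0.0, 1e-3, 10_000))            # has to run step by step
    with Pool(4) as pool:                                      # chunks run side by side
        hits = sum(pool.map(mc_chunk, [(250_000, s) for s in range(4)]))
    print(4.0 * hits / 1_000_000)                              # roughly 3.14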
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,624
Reaction score
2,343
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
* Raytracing is a serial problem, as the object being reflected needs to be ready by the time the ray hits the mirror

Wrong, actually. Tracing a single simple ray is a serial problem at first glance.

But it is made up of multiple parallel sub-problems (e.g. intersection tests), and if you raytrace a whole image, you again have a problem that is easy to divide into partitions.

I am really surprised you use raytracing as an example against parallelism, because it is such a classic parallel example that it was one of my highlights as an apprentice at the German aerospace agency back then:

We bought 4 PCs a few months earlier than needed for the new employees and used them to build a PVM cluster with POV-Ray as the demo application, to show that the expensive old number-cruncher offer from our rival computer department was not without cheaper alternatives...
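
Just for flavour - a tiny Python toy (nothing to do with the actual PVM/POV-Ray setup, and the sphere and camera numbers are invented) showing why the partitioning is so easy: every row of the image can be traced without knowing anything about the other rows.

Code:
# Toy ray/sphere image to show why raytracing partitions so easily: every row
# can be traced without knowing anything about the other rows.
from multiprocessing import Pool

W, H = 64, 48
SPHERE_Z, RADIUS = 3.0, 1.0          # one sphere in front of a pinhole camera at the origin

def trace_row(y):
    row = []
    for x in range(W):
        dx = (x - W / 2) / W         # direction of the camera ray through pixel (x, y)
        dy = (y - H / 2) / H
        # Ray-sphere intersection: solve |t*(dx,dy,1) - (0,0,SPHERE_Z)|^2 = RADIUS^2 for t
        a = dx * dx + dy * dy + 1.0
        b = -2.0 * SPHERE_Z
        c = SPHERE_Z * SPHERE_Z - RADIUS * RADIUS
        row.append("#" if b * b - 4.0 * a * c >= 0.0 else ".")
    return "".join(row)

if __name__ == "__main__":
    with Pool() as pool:             # rows are independent -> trivial to hand out
        image = pool.map(trace_row, range(H))
    print("\n".join(image[::4]))     # coarse preview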
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
You are certainly right that you can parallelize sub-problems and speed things up that way - to a degree.

I'm using raytracing as an example because graphics cards render based on massively parallel computing and hardware-accelerated vector operations - and if you want to bring things like shadows or reflections into real-time 3D, you have to massively fake it, while they just come out of raytracing effortlessly.

So I think it's actually a fairly good example for the limits of parallel computing.
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,624
Reaction score
2,343
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
while they just come out of raytracing effortlessly.

So I think it's actually a fairly good example for the limits of parallel computing.

Have you EVER implemented a raytracer? I have. It is anything but effortless: finding effective data structures for one CPU is one problem, parallelizing the algorithms to make use of multiple CPUs is another.

Modern GPUs fake it, but they do it in a pretty good way, compared to what you would need to calculate similar graphics with a raytracer in real time.
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
Um... I didn't say implementing a raytracer is without effort - but things like shadows and reflections come out of what a raytracer does without additional effort because of the way it treats the properties of light.

In a real-time pipeline, you have to do a lot of song and dance to get faked shadows.

Sorry if that was not clear enough - I sure didn't want to belittle raytracing coders.
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,624
Reaction score
2,343
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
Um... I didn't say implementing a raytracer is without effort - but things like shadows and reflections come out of what a raytracer does without additional effort because of the way it treats the properties of light.

In a real-time pipeline, you have to do a lot of song and dance to get faked shadows.

Sorry if that was not clear enough - I sure didn't want to belittle raytracing coders.

Well, the disagreement we have is maybe pretty trivial. You are talking about a single cast ray, reflected n times. Yes, that alone gives great results for a single pixel, and is only parallel in its supporting tasks.

But a picture has more than one pixel. Also, more complex shading models result in more rays being created per reflection. That is exactly why a raytracer will give much better quality than a classic GPU renderer, but will also be much slower.
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
Also, more complex shading models result in more rays being created per reflection. That is exactly why a raytracer will give much better quality than a classic GPU renderer, but will also be much slower.

A classic GPU renderer can in fact only do eye-vertex-light (i.e. it can't compute even a single ray reflected off another surface), because the shaders can't look up any geometry other than the vertex, the eye and the light.

With environment maps or deferred techniques some of this can sort of be overcome, but I completely agree with your statement about multiple rays per reflection.
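
To make that concrete, this is roughly all the information such a per-vertex lighting step has to play with - a Python sketch of a Blinn-Phong-style term, not actual shader code, with made-up constants:

Code:
# All the data an "eye-vertex-light" shading step has to work with: the vertex,
# its normal, the eye and the light. No other geometry is visible from here,
# which is why true reflections and shadows need extra passes or fakery.
import numpy as np

def shade_vertex(position, normal, eye_pos, light_pos,
                 albedo=0.8, specular=0.5, shininess=32.0):
    n = normal / np.linalg.norm(normal)
    l = (light_pos - position) / np.linalg.norm(light_pos - position)
    v = (eye_pos - position) / np.linalg.norm(eye_pos - position)
    h = (l + v) / np.linalg.norm(l + v)                 # Blinn-Phong half vector
    diffuse = albedo * max(float(n @ l), 0.0)
    spec = specular * max(float(n @ h), 0.0) ** shininess
    return diffuse + spec                               # scalar intensity for this vertex

print(shade_vertex(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, 5.0]), np.array([2.0, 2.0, 5.0])))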
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,624
Reaction score
2,343
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
A classic GPU renderer can in fact only do eye-vertex-light (i.e. it can't compute even a single ray reflected off another surface), because the shaders can't look up any geometry other than the vertex, the eye and the light.

With environment maps or deferred techniques some of this can sort of be overcome, but I completely agree with your statement about multiple rays per reflection.

Yeah. And if you use more buffers, more GPU memory and more specialized shaders, you can do much more magic with GPGPUs. Sadly, this means you have less GPU memory left for textures in the first place, which quickly limits your performance.
 

Thorsten

Active member
Joined
Dec 7, 2013
Messages
785
Reaction score
56
Points
43
Actually, upon closer reflection, you're right - raytracing is not a good example, because it depends too much on the GPU rendering pipeline vs. the technicalities of how raytracing works, and it gets murky and lost in details quickly.

I guess the general criterion for the simulation of physical systems is: you can parallelize domains which do not exchange significant information during a timestep.

So a classic rendering pipeline (without additional camera passes or buffers) treats all triangles and pixels as independent entities which can't light each other, hence it parallelizes easily.

In fluid dynamics, two domains which are separated by more than the speed of sound times the timestep can be computed independently. (Interestingly, that makes hypersonic aerodynamics easier than subsonic, because in the hypersonic case information flow is always one way - fluid cells never really care what's downstream, and left-right domains tend to disconnect.)

On the other end, a quantum field is characterized by non-local coherence, i.e. all spacetime points always know what happens at all other spacetime points, so it never parallelizes.

So I guess whether a problem parallelizes well is a question of what is considered relevant (if heat radiation is relevant, hypersonic aerodynamics no longer separates into domains), what the timestep in question is, and what the mechanism of information exchange is.

The cruder the approximation, the better it parallelizes; the slower information is exchanged, the better it parallelizes...

Coming back to the original question, I guess that kind of argues against running a high-fidelity simulation massively parallel.
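
Here's a rough sketch of what I mean by domains that exchange little information (a toy 1D diffusion in Python, not a real CFD code; partition count and coefficients are arbitrary) - each partition advances a timestep on its own and only needs one ghost value from each neighbour:

Code:
# Domain decomposition sketch: each partition advances a timestep on its own
# and only exchanges its edge ("ghost") cells with its neighbours. The less
# data crossing the boundary per step, the better it scales.
import numpy as np

def step_partition(u, left_ghost, right_ghost, alpha=0.1):
    # One explicit diffusion step on a local partition, given the neighbours' edge values.
    padded = np.concatenate(([left_ghost], u, [right_ghost]))
    return u + alpha * (padded[:-2] - 2.0 * u + padded[2:])

def run(n_cells=64, n_parts=4, n_steps=50):
    u = np.exp(-((np.arange(n_cells) - n_cells / 2) ** 2) / 20.0)   # initial bump
    parts = np.array_split(u, n_parts)
    for _ in range(n_steps):
        # Halo exchange: one value per neighbour is all that crosses a boundary.
        left = [parts[i - 1][-1] if i > 0 else parts[i][0] for i in range(n_parts)]
        right = [parts[i + 1][0] if i < n_parts - 1 else parts[i][-1] for i in range(n_parts)]
        parts = [step_partition(parts[i], left[i], right[i]) for i in range(n_parts)]
    return np.concatenate(parts)

print(round(float(run().max()), 3))   # the bump's peak has diffused down from its initial 1.0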
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,624
Reaction score
2,343
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
Actually, the trick for that is to limit the exchange of data between partitions (and different CPUs) to a minimum. The art of that task is called "partitioning".

If you ever look closely at the pre-processing of a subsonic CFD simulation, it does exactly that. It splits the volume of virtual air in the virtual wind tunnel into tetrahedron cells, and then distributes those cells into 64 more or less even volumes which have a minimal number of touching cells. And then 64 multicore/quad-CPU nodes run amok on the data. About 8 hours later, you have a mail in your inbox saying that your calculation is done ... and somebody must pay for the 2048 CPU hours.
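
If you want the flavour without a real mesh partitioner, recursive coordinate bisection in a few lines of Python gets the idea across (toy point cloud, made-up sizes - the real tools solve a much harder graph problem):

Code:
# Toy flavour of partitioning via recursive coordinate bisection: split a cloud
# of cell centres into 64 nearly equal parts. Real partitioners solve a much
# harder graph problem, explicitly minimising the touching faces.
import numpy as np

def bisect(points, indices, depth):
    if depth == 0:
        return [indices]
    axis = depth % 3                                     # cycle through x, y, z
    order = indices[np.argsort(points[indices, axis])]   # sort this subset along one axis
    half = len(order) // 2
    return (bisect(points, order[:half], depth - 1) +
            bisect(points, order[half:], depth - 1))

rng = np.random.default_rng(0)
cells = rng.random((10_000, 3))                          # stand-in for tetrahedron cell centres
parts = bisect(cells, np.arange(len(cells)), depth=6)    # 2**6 = 64 partitions
print(len(parts), [len(p) for p in parts[:4]])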
 

Artlav

Aperiodic traveller
Addon Developer
Beta Tester
Joined
Jan 7, 2008
Messages
5,790
Reaction score
780
Points
203
Location
Earth
Website
orbides.org
Preferred Pronouns
she/her
I'm not sure how the graphics stuff comes into play here - if we are simulating a universe, we sure as hell won't be doing it using the methods from computer games.


Think of CPU emulation, an ur-example of a problem that cannot be parallelized in any way.
Every instruction depends, on average, on the output of the previous one - no matter how powerful your computers are, you won't get the emulated CPU running faster than one of them.

However, imagine that you work at Intel and need to find out whether the new CPU design, in the form of a map of a billion logic gates, would work.

This might sound like the same problem, but it actually parallelizes really well - you split the RTL between N CPUs arranged in a grid, with each CPU exchanging border data with its neighbours, and you can run that design at close to N times speed-up (up to a point of saturation), compared to having only one CPU.


Now, think of a simple thing called the Game of Life.
It's a perfectly local cellular automaton, which can be scaled arbitrarily large by adding computers to the cluster simulating it.
You might think that it's really wasteful to simulate quadrillions of empty cells, but there is a neat trick called Hashlife, which lets you compute only the parts that are changing, ignoring all that empty space.

We don't really know what the outside context might look like. It might very well be much larger than our universe (which is, incidentally, mostly empty space), and it might very well run on different laws of physics.

But if I were to simulate a universe, I would design something similar to GoL - a perfectly local, simple set of rules that, over great scale, produces very complex patterns.
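
A minimal step function shows just how local the rule is - each cell only looks at its 3x3 neighbourhood, which is exactly what lets you cut the grid into chunks with a one-cell border exchange (a Python toy on a wrapping grid; Hashlife itself is a memoised quadtree on top of the same rule and isn't shown):

Code:
# Minimal Game of Life step: each cell looks only at its 3x3 neighbourhood,
# so the grid can be cut into chunks with a one-cell border exchange.
import numpy as np

def life_step(grid):
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

glider = np.zeros((8, 8), dtype=np.uint8)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
for _ in range(4):                     # after 4 steps the glider has shifted one cell diagonally
    glider = life_step(glider)
print(glider)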
 