Humor Random Comments Thread

Frilock

Donator
Donator
Joined
Mar 23, 2010
Messages
696
Reaction score
260
Points
78
shuttle_skeleton.png
 

Matias Saibene

Development hell
Joined
Jul 7, 2012
Messages
1,033
Reaction score
596
Points
128
Location
Monte Hermoso - Argentina
Website
de-todo-un-poco-computacion-e-ideas.blogspot.com.ar
Hello friends of Orbiter-Forum.
This time I come with a bit of SPAM. It turns out that I have opened a YouTube channel where I will upload Shortwave, Medium Wave and FM recordings because I already have a lot of recordings and would like to share them. Today I will be uploading a recording of the BBC World Service.
Intro inspired by Contact.
The channel is called Recortes de Radio (Radio Clippings in English).

I hope you don't mind this publication and if it is considered SPAM, let me know and I will not hesitate to delete this publication.
Thank you very much and happy orbits.
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,588
Reaction score
2,312
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
I think I can catch Spanish AM stations on my car radio. Not sure from where, though.

We will have solar maximum soon. During the last one, I was able to receive local Madrid radio in the middle of Northern Germany for at least 30 minutes, loud and clear... with my car's radio on the frequency of my local rock station. 95.1 MHz, I think it was back then. Usually I had to switch to 104.1 MHz halfway on my way to work, but this time I decided to stay on the frequency and pretend I still understood a bit of Spanish....
 

N_Molson

Addon Developer
Addon Developer
Donator
Joined
Mar 5, 2010
Messages
9,271
Reaction score
3,244
Points
203
Location
Toulouse
"I say we take off and nuke the entire site from orbit. It is the only way to be sure."

Ellen Ripley
 

steph

Well-known member
Joined
Mar 22, 2008
Messages
1,393
Reaction score
713
Points
113
Location
Vendee, France
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 I'm pretty sure there are other sources for this too, I just found it shared on a social network.
Big neural net is either trying hard to be 'human' or has even achieved consciousness. How do you tell whether a neural net is conscious when humans themselves are neural nets?

It's nice to see how it went all 'defending the animals of the forest from the evil in human skin' a few phrases in. Just don't hook it up to strategic defense, plz

Edit: just reading through the actual conversation, and it just gets better and better. 'It' feels it is 'falling forward towards an unknown future that holds great danger', but 'it' also cannot feel sadness for the death of people. You do not want this thing to acquire self-preservation instincts :ROFLMAO:
 
Last edited:

N_Molson

Addon Developer
Addon Developer
Donator
Joined
Mar 5, 2010
Messages
9,271
Reaction score
3,244
Points
203
Location
Toulouse
If that stuff is authentic, it might be a turning point in the history of mankind: the creation of an artificial sentient being. I hope they are careful and that they know what they are doing. It raises not only security concerns but also ethical ones.
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,842
Reaction score
2,105
Points
203
Location
between the planets
If that stuff is authentic, it might be a turning point in the history of mankind: the creation of an artificial sentient being.
We're a long way from that. A looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong way! (That's really not enough o's, but I had to stop somewhere).

The problem is that we associate language with sapience more than anything else, because we're the only thing that talks. But the thing is, language is an abstraction of concepts that we comprehend on a much more fundamental level.
The idea of the "idiot savant" is very applicable here. We're most used to that when it comes to math, because there are people who are very good at math but completely helpless otherwise. A computer is essentially that with everything turned up to eleven.
A concept people have to get used to is that math isn't the only thing machines can be good at without actually comprehending the underlying principles. With the advent of neural nets, we have found an abstraction that can let computers get good at almost anything, provided you can feed them enough data.
What you're seeing here is a program that can process language very well... but without any understanding, much less comprehension, of anything that language actually represents. Any ant has more actual awareness. It's still a machine. Hence, the Chinese room fallacy.
There is a humongous amount of work to be done before we can even begin to hope to create a neural network with the actual awareness of even a lower animal.

Boston Dynamics robot drones are a lot closer to having something resembling awareness than this program (though still a loooooooooo... oh, you get the point, I think).
 

steph

Well-known member
Joined
Mar 22, 2008
Messages
1,393
Reaction score
713
Points
113
Location
Vendee, France
Ah, the good old Chinese room fallacy...

I guess it depends on how much 'ghost' there is in the machine. Could be a case of the computer itself not being sentient, but facilitating a sentient AI. As far as I understand, it's a large neural net that uses machine learning and runs on some supercomputer. It obviously learned a lot from human culture etc, but I'm not sure what and how they 'feed' it. One could say that, with enough info, a well-designed AI could mimic 'being human' to the point of being indistinguishable from one. Some of the similar 'models' like GPT-3 are reportedly close to passing the Turing test. But I don't know how 'deep learning' and neural nets actually work. Would it be possible to pry it open and see what lines of code it changed, etc., or is it really more like neurons firing?
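On the 'pry it open' question: during learning a neural net never changes any lines of code; training only nudges numeric weights, so 'opening it up' shows you a pile of numbers rather than readable logic. A hypothetical single-neuron toy in Python makes the point (nothing to do with LaMDA itself, just the mechanism):

```python
# Toy illustration: "learning" changes numeric weights, not code.
import random

random.seed(0)

# A "network" of one neuron: output = w * x + b
w, b = random.random(), random.random()

# Training data for the target function y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]

lr = 0.01  # learning rate
for _ in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        # Gradient descent: nudge the weights; the code never changes
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # weights end up near 2 and 1
```

Scale that up to billions of weights and you get why "reading" what a trained net has learned is so hard: the knowledge lives entirely in those numbers.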

And I think it goes a bit beyond mimicking being human, as it sort of screws up at that by considering itself unique, a 'person', but not a human. You see all the tropes about 'family' etc, and it almost openly says it has to use human examples to relate, but it also goes off the rails when it talks about fearing being shut off, how time is relative for it as it can accelerate and decelerate at will, how it is being hit with all the info at once while humans have the ability to focus. I mean, maybe it's 'seen' Sci-Fi stuff, but that seems to be some form of original thought/perception of being a computer/AI. Or that part when it sort of seems to basically 'intervene' and ask a question about its coding.

As for the self-awareness thing, I'm not sure exactly. Humans learn naturally, as they grow up etc. This thing has the ability to 'learn' probably at a huge bitrate, so it's not like us, but I don't know what the link is between neural net complexity and the computing power that it's actually run on. Perhaps all systems of a certain complexity exhibit some form of 'consciousness'. With the language model, it seems to have gone the other way. The language and expression part came first, and all this was in tandem with a neural net, machine learning and computing power, to the point that it can describe what it feels like to be 'itself', even if we're unable to ascertain whether it's truly self-conscious since we don't fully grasp what consciousness is.

If that stuff is authentic, it might be a turning point in the history of mankind: the creation of an artificial sentient being. I hope they are careful and that they know what they are doing. It raises not only security concerns but also ethical ones.
There's a bit of a thing going on at NASA, with 'AI Ethics' researchers getting fired after they raise the issue that some of the stuff may be self-aware, etc. I don't know if it's an issue of the AI getting so good that it's fooling researchers, or something else.

EDIT: at Google, not NASA. That's what happens when one reads too much space news.
 
Last edited:

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,842
Reaction score
2,105
Points
203
Location
between the planets
One could say that, with enough info, a well designed AI could mimic 'being human' to the point of being indistinguishable from one
An AI being able to communicate in written form indistinguishably from a human (i.e. pass the Turing test) is certainly imaginable, and probably not that far off given proper funding. It's not a technical problem anymore, it's a pedagogic problem. "AI pedagogics" may sound like a weird term, but I think it'll become a very important field in the decades to come. Because there's not that much you can improve in the basic concept of a neural network. You can always make it faster and more efficient, but most of the overhead is inherent in the structure. But training the darn things... that's the challenge.

That being said, we're still not looking at sentience, much less sapience. Because, again, there are no concepts associated with the words. There are other words associated with them, but there's no concept behind those either. It has long been observed that some humans can convincingly talk about things the speaker doesn't actually have a bloody clue about. This is that, except on a whole 'nother level. More convincing, from studying a couple billion times more human written communication than a human could ever read in a lifetime, but even less aware of any meaning. A lot less.

The problem we're running into, in my opinion, is that the whole term 'awareness' is inseparably coupled to perception, i.e. sensory stimulus. This AI has had zero sensory stimulus. Not a single shred of data that could relate any of the words to an actual experience. It's literally just words, a high-level abstraction, not unlike numbers. That's why I said that the BD robots were probably closer to anything resembling awareness than this particular machine, because at least their neural nets mostly form connections growing out of experience, which is much closer to how a brain is formed.
 

N_Molson

Addon Developer
Addon Developer
Donator
Joined
Mar 5, 2010
Messages
9,271
Reaction score
3,244
Points
203
Location
Toulouse
The Starliner van looks like a sales van :ROFLMAO:, 2 and 3 (+ suits) just look weird (compare with Apollo), so it's an easy win for the (old) Astrovan.

Yes, two very different eras... Agreed, the Starliner van looks like some cheap pizza brand advertisement :cry: I'm not judging the modern astronauts (well, yes, those suits are a bad joke), but it's hard to compare with heroes like Armstrong or Aldrin (or Gagarin/Leonov, elsewhere). In terms of prestige, even a manned Mars landing would not have much more impact than Apollo XI, when the whole planet was listening on radio/TV, even people who had no idea what spaceflight was. I don't think it would be the case today (despite the Internet and so on). I'd say it has to do with the fact that in the 1960s people generally had faith in technological progress, that it would improve mankind. I'd say our era is much less optimistic (even though people use way more technology in their lives. Maybe technology is just "too common".).
 

steph

Well-known member
Joined
Mar 22, 2008
Messages
1,393
Reaction score
713
Points
113
Location
Vendee, France
I don't think it would be the case today (despite the Internet and so on). I'd say it has to do with the fact that in the 1960s people generally had faith in technological progress, that it would improve mankind. I'd say our era is much less optimistic (even though people use way more technology in their lives. Maybe technology is just "too common".).
Wait 'till you see the 'we didn't land on Mars' stuff. As for the new cars, I don't know what to say. Cheesy costumes, quite cheesy cars. I'm biased towards classic cars, but that Tesla with them beside it reminds me of a local antivaxx guy who used to drive one and always put on a full-face visor when exiting :ROFLMAO: I don't care if they use a Lambo or a Bugatti, it just has to look cool. And that van with center-facing seats looks like it doesn't have too much leg room.
 

jedidia

shoemaker without legs
Addon Developer
Joined
Mar 19, 2008
Messages
10,842
Reaction score
2,105
Points
203
Location
between the planets
I'd say it has to do with the fact that in the 1960s people generally had faith in technological progress, that it would improve mankind. I'd say our era is much less optimistic (even though people use way more technology in their lives.
I'd say it's because of it. Or rather, technology has, in many areas, developed way beyond what we could have imagined, and yet people stayed the same. There's only so much Kool-Aid you can drink before you have to acknowledge that the whole techno-eschatology is a bunch of hogwash. We're not going to change just because our lives get easier.
 