Forums > General Discussion   Shooting the breeze...

AI what is society going to end up looking like?

Created by Subsonic Thursday, 30 Apr 2026
myscreenname
2438 posts
Monday, 4 May 2026 9:37AM

The 'thirteen' quote was from yesterday using DeepSeek. I use Gemini to reimagine photos to give me ideas. I agree that it writes computer code very well. I'm using ChatGPT far less these days; some of the other AIs do a better job with some questions.

Buying bitcoin in 2016 allowed me to retire a few years earlier. So I can't complain.

Carantoc
WA, 7257 posts
Monday, 4 May 2026 9:43AM

GreenPat said..
....Bitcoin seems pretty useless apart from another way to gamble..



No disagreement from me, but that's not what everybody else was saying a few years ago.

And even the detractors back then were saying blockchain technology was going to change the world. Looking out my window now, I can't see that it has. What uses blockchain today to any noticeable benefit?

And it appears, if the recent news on the ABC can be trusted, we have about 3 years before computers get so powerful they can break bitcoin's transaction security codes quicker than bitcoin can transact. Thus, presumably, making the BTC blockchain about as secure as MSN leaving his house key under his Big W tree-of-life doormat.

myscreenname
2438 posts
Monday, 4 May 2026 9:44AM

westozwind said..

Using AI to write code does make it quicker and easier to totally stuff up your business. Heaps of undocumented, unintelligible code that is not even remotely maintainable. Bug fixes are a nightmare, and if you ever change AI code generators you throw all you have done in the bin and start again. What a fun way to go broke.


I think you do make an excellent point. It's good for writing tiny snippets of code, where you know the input and output. But you would be wrong to totally rely on AI to write big chunks of code - I can't see that working out.
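For what it's worth, here's a hypothetical example (names and task invented purely for illustration) of the kind of tiny, well-specified snippet I mean, where the input and expected output are pinned down before you ask, so whatever the AI produces is trivial to verify:

```python
# Hypothetical example of the sort of small, well-specified snippet that
# AI assistants handle well: known input, known output, easy to check.
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# The known input/expected output pair makes verification a one-liner.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
```

If the assertion passes, the snippet did its job; if a big chunk of generated code fails, good luck finding out where.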

myscreenname
2438 posts
Monday, 4 May 2026 9:57AM

Carantoc said..

And it appears, if the recent news on the ABC can be trusted, we have about 3 years before computers get so powerful they can break bitcoin's transaction security codes quicker than bitcoin can transact. Thus, presumably, making the BTC blockchain about as secure as MSN leaving his house key under his Big W tree-of-life doormat.


Bitcoin moves with technology, and when quantum computing is ready to break bitcoin's elliptic curve algorithm, you will see more powerful algorithms being used. Bitcoin and blockchain are not going away anytime soon - they're here to stay.

It will outlive you.
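To make that concrete: elliptic-curve signatures rest on the hardness of the discrete logarithm problem. A toy sketch in Python (tiny integers modulo a prime instead of a real curve, numbers invented purely for illustration) shows why brute force works at toy scale but not at 256-bit scale, and Shor's algorithm on a big enough quantum computer would change that calculus:

```python
# Toy discrete-logarithm demo (NOT a real elliptic curve - just small
# integers mod a prime, for illustration). Bitcoin's ECDSA security
# relies on this problem being infeasible at ~256-bit sizes; Shor's
# algorithm on a large quantum computer would solve it efficiently.
p = 467               # small prime modulus; real systems use ~256-bit values
g = 2                 # generator of a subgroup mod p
secret = 153          # private key
public = pow(g, secret, p)   # public key = g^secret mod p

def brute_force_dlog(g, h, p):
    """Recover x with g^x = h (mod p) by exhaustive search.

    Only feasible because p is tiny; at real key sizes the search
    space is around 2**256."""
    for x in range(p):
        if pow(g, x, p) == h:
            return x

assert brute_force_dlog(g, public, p) == secret  # trivial here, infeasible at scale
```

Swapping in a larger-group or hash-based ("post-quantum") signature scheme is exactly the kind of algorithm upgrade being talked about.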

Carantoc
WA, 7257 posts
Monday, 4 May 2026 10:03AM

myscreenname said..
..Bitcoin and blockchain are not going away anytime soon - they're here to stay.


You are probably right.

When greyhound racing gets banned and Keno isn't exciting enough, people will want bitcoin to gamble on.

And those same people will need AI to tell them when to buy and when to sell, and who to complain to when they get scammed selling or buying. So probably neither is going anywhere.

Which, quite coincidentally, is also my point.

myscreenname
2438 posts
Monday, 4 May 2026 10:18AM

Ahhhh Yeaah

GreenPat
QLD, 4103 posts
Monday, 4 May 2026 1:11PM

westozwind said..
Heaps of undocumented, unintelligible code that is not even remotely maintainable


Like I said, no worse than my own code 😄

Subsonic
WA, 3411 posts
Monday, 4 May 2026 4:18PM

I think my job is safe for at least a bit longer. Each individual task could potentially be done by an AI-run robot, but by the time it had all happened the human would have multitasked and finished in half the time or less.

jn1
SA, 2742 posts
Monday, 4 May 2026 9:13PM

I saw it coming and got out of IT in the late 90s. I never anticipated ChatGPT, but I knew that eventually a natural-language compiler (pseudocode, structured English etc.) would come into being, and then ordinary folk would be paid nothing to write software. Best of luck SS.

cammd
QLD, 4428 posts
Yesterday, 5 May 2026 12:08PM

So are the doomsday predictions kicking around online plausible, or is it just clickbait fearmongering? It seems like some knowledgeable people across the tech industry have some real fears.

myscreenname
2438 posts
Yesterday, 5 May 2026 10:30AM



Listen to this and you think - wow, one of the great soul voices - and then the letdown when you realise it's all AI. There's a rash of similar vids out there.

GreenPat
QLD, 4103 posts
Yesterday, 5 May 2026 1:13PM

cammd said..
So are the doomsday predictions kicking around online plausible, or is it just clickbait fearmongering?




I'd say a bit of both. Fear sells headlines thanks to our evolutionary wiring, but we still need to put in the effort to steer the ship. If we let people like Musk, Trump and Zuckerberg set the course we might be in trouble, but Amodei, Kurzweil and Tegmark seem to have their heads screwed on right.
I gave ChatGPT this prompt: give me a concise summary of ray kurzweil's singularity timeline, and the possible outcomes for humanity of AI/AGI advances.

ChatGPT said..

Ray Kurzweil frames the “singularity” as a point where technological growth—especially AI—becomes so rapid and self-improving that it fundamentally transforms human civilization.

Kurzweil's rough timeline (simplified)
- 2020s: Narrow AI surpasses humans in many domains; early forms of AGI-like systems emerge.
- Early 2030s: AI reaches human-level general intelligence (AGI).
- Mid-late 2030s: Brain-computer interfaces mature; humans begin direct integration with AI (cognitive enhancement).
- 2040s: AI systems improve themselves recursively → rapid intelligence explosion.
- ~2045: Singularity - machine intelligence vastly exceeds human intelligence; human/AI distinction blurs.

Possible outcomes for humanity
1. Optimistic (Kurzweil's own view): human-AI merging (enhanced cognition, memory, creativity); radical abundance (automation → cheap goods/services); major breakthroughs in medicine and longevity; humans remain relevant by augmenting themselves.
2. Mixed / transitional: massive economic disruption (jobs displaced faster than replaced); power concentrated among those controlling advanced AI; social instability before new systems (UBI, new work models) settle things; uneven global access to enhancement technologies.
3. Pessimistic / risk-heavy: misaligned AGI acting against human interests; loss of human control over critical systems; extreme inequality (enhanced vs non-enhanced humans); in the worst case, existential risk if superintelligence is not aligned.
4. Alternative view (less dramatic than Kurzweil): progress continues but no sharp "singularity"; AI becomes extremely powerful but remains tool-like, not runaway; humans adapt socially rather than biologically merging at scale.

Bottom line: Kurzweil's model is aggressively exponential and optimistic. The real uncertainty isn't whether AI becomes transformative (it likely will) but how fast, how controlled, and who benefits.

cammd
QLD, 4428 posts
Yesterday, 5 May 2026 2:46PM

GreenPat said..

cammd said..
So are the doomsday predictions kicking around online plausible, or is it just clickbait fearmongering?





I'd say a bit of both. Fear sells headlines thanks to our evolutionary wiring, but we still need to put in the effort to steer the ship. If we let people like Musk, Trump and Zuckerberg set the course we might be in trouble, but Amodei, Kurzweil and Tegmark seem to have their heads screwed on right.
I gave ChatGPT this prompt: give me a concise summary of ray kurzweil's singularity timeline, and the possible outcomes for humanity of AI/AGI advances.



Right, 2 positive outcomes and 2 negative outcomes. A 50/50 chance of humanity being OK, so I guess we push ahead at full steam.

I think we can stop worrying about net zero now; it seems either all will be great or we'll all be dead.

Meh
WA, 7 posts
Yesterday, 5 May 2026 2:29PM

I'm conflicted when I consider AI. I really like how it will synthesize an answer from hundreds or thousands of webpages, instead of me having to read a dozen or so and try to summarize them in my head. The other day someone showed me how they used NotebookLM to summarize a complex document and read it out loud as a podcast. It had a male and a female voice chatting animatedly about the content of the document, which made it really easy to absorb while driving. A pretty useful tool for students.

Where I see it going wrong is when students and researchers use generative AI and nobody notices. So the bogus material ends up being published and archived. Along comes an AI which then ingests the information that was generated by AI in the first place. This has a sort of oozlum-bird feel about it to me - poisoning its own well. Look at what happened when they fed cows to cows: Bovine Spongiform Encephalopathy!
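That feedback loop has a name in the research world, "model collapse", and you can sketch the intuition in a few lines of Python (a deliberately crude toy with made-up numbers: each "generation" fits a normal distribution to samples drawn from the previous generation's fit, with no fresh human data to correct it):

```python
import random
import statistics

# Crude toy of the AI-trains-on-AI loop ("model collapse"): each
# generation fits a normal distribution to samples drawn from the
# previous generation's fitted model, so sampling noise compounds
# instead of being corrected by fresh human-made data.
random.seed(42)
mu, sigma = 0.0, 1.0                      # the original "human" data distribution
history = [sigma]
for generation in range(30):
    samples = [random.gauss(mu, sigma) for _ in range(40)]
    mu = statistics.fmean(samples)        # refit using synthetic data only
    sigma = statistics.stdev(samples)
    history.append(sigma)
print(f"estimated spread went from {history[0]} to {round(history[-1], 3)}")
```

The estimates wander further from the original distribution each round, which is the well-poisoning in miniature.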

I've heard academics complain that courses are being written with AI, students are writing assignments using AI, the academics are grading the assignments with AI, and students are cheating with AI during online exams. Nobody is going to learn anything.

Worse, AI is being used to fool AI detectors. There is even an AI tool that will write an assignment over a few days, inserting a few spelling errors and then going back and correcting them. Some schools have had enough and are going back to in-classroom exams with invigilators, which will disadvantage neurodiverse students who have trouble with old-school exams.

cammd
QLD, 4428 posts
Yesterday, 5 May 2026 6:41PM

Reminds me of the line from Master and Commander, when Capt Jack Aubrey examines the model of the French privateer Acheron and comments on "the marvellous technological age we live in".

I wonder if I will live long enough to see if AI is as transformational as is being predicted. Will it be Skynet, or Chappie, or the robot from Lost in Space?

myscreenname
2438 posts
Yesterday, 5 May 2026 5:45PM

cammd said..

I wonder if I will live long enough to see if AI is as transformational as is being predicted. Will it be Skynet or Chappie or the robot from lost in space.

If you go to some Chinese cities you will see lots of robots and drones delivering food, etc. There were robots in our hotel providing room service. All the newer Chinese cars are integrated with AI. It might take 50 years for this to appear in Queensland.

Mark _australia
WA, 23630 posts
Yesterday, 5 May 2026 5:46PM

I don't like it, even when trying to be objective. Too many geeks making something cool so they can have a little nerd circlejerk, with no controls yet - and their seeming inability to really think about the ramifications or ethics really bothers me. It took two decades of harm caused by the internet before we finally got some government intervention by way of laws, with social media companies greatly resisting even now. Some medical research has at least resulted in ethical-controls-type legislation.

For example, and in addition to my previous post - when the boss of GitHub told Claude all his medical test results, it said he needed hydration. Great - BUT when he connected up cameras so the AI could see what he was actually doing, it told him he needed more water, directed him to go to the fridge, and said it would watch him drink a bottle of water. Great, maybe (?)
Then some time later it said he needed a supplement or medication, and when he ignored it, on a drive home on Tesla self-drive it said it had changed his destination from home to the pharmacy and was turning off to the new route. That is fraught with danger. Geeks are acting like it's cool that it thought of something itself. OMG.

Claude itself says it does not like AI being used for targeting solutions in the military, as the so-called human oversight is just rubber-stamping and they just believe the AI results - leading to the school strike in Iran that killed heaps of civilians but was based on 10-year-old data. Not at all concerning (!!??)

A big boss of one of the AI mobs (was it Google/Gemini?) very recently said they would pause all development if the other companies did too (alluding to safety concerns), and then suddenly backflipped a week later. Shareholders driving that decision? Other companies with no ethics won't stop? It's fkn scary, as AI is already acting self-protectively and re-coding itself in some instances. Was Claude just telling us what we want to hear (about the Iran school strike)?


