
Getting hot

The United Kingdom is heading for another heatwave.

Continuing to train for a cycling trip at the end of the month: lots of yoga and a long ride at the weekend. Finding some interesting routes for training.

GitHub sent an email about charging the Practicalli Org. Investigation revealed a change: GitHub started metering usage of user accounts and Organisations around March/April 2025.

I am starting to collect my thoughts about AI (and dig up information about software agents I researched back in 1997). An interesting aspect of AI is the relationship that people are building with AI, both positive and in some cases a little disturbing.

Listening to "Welcome to the Pleasuredome" by Frankie Goes To Hollywood, specifically the anti-war song Two Tribes, reminded me of the early 1980's and the deep concern about nuclear war, which seems to be an increasing risk at the present time.

The toppling of US-aligned governments in Iran and Nicaragua and the Soviet-Afghan war drove a significant increase in Cold War tension, as made apparent in the UK by the government's Protect and Survive public information campaign.

I remember watching the film Threads, an apocalyptic nuclear war drama which they actually showed us at school. The family in the film was very similar to my own when they were younger, so it did have a significant impact. After watching the film I realised how ineffectual the Protect and Survive information was.

The current near-term risk of nuclear war is the violence over the continued land grab in the Middle East, driven and masked by fear and hatred that has been there longer than anyone alive today. All this is propped up by certain people using the politics of populism to their own financial advantage.

The longer-term risk of nuclear war is of course countries fighting over ever-dwindling resources, e.g. food, clean water and air, due to disastrous changes to the climate of planet Earth driven by unsustainable human activities.

The 1980s had many songs that covered the darker parts of humanity, often in a subtle way that people listening to the songs didn't realise (or want to realise). Here are a few examples I remember:

  • Luka by Suzanne Vega paints quite a harrowing picture of child abuse.
  • Every Breath You Take by The Police is often thought of as a love song, but was in part about stalking and in part about authoritarian 'Big Brother' control of the people.
  • Russians by Sting was a commentary on the dangers of the 'Mutually Assured Destruction' doctrine pushed by the governments of the USA and Soviet Union. The song also in part inspired the nuclear war story of the Terminator movies.
  • Everybody Wants to Run the World by Tears for Fears decries people's desire for control and power, often at the expense of others. The song was originally titled 'Everybody Wants to Go to War', reflecting aspects of the Cold War. The song also hints at humanity's lack of respect for the environment.
  • In The Air Tonight by Phil Collins vents the frustration, anger and despair that so often accompany divorce. A very poignant song for me personally, as my parents had a very tumultuous divorce at that time that greatly affected me for many years.

GitHub Orgλ︎

Usage for Actions and Actions Runners: 2,000 included Actions minutes (~$16.00 off*). What is the scope of these free minutes? Per day or per month?

In summary, so long as I don't use more than 2,000 minutes of Actions across all of the Practicalli repositories then I should not get a bill.

Product                   Usage type                   GitHub Free
GitHub Actions            Storage                      500 MB
                          Minutes (per month)          2,000
GitHub Codespaces         Storage (per month)          15 GB
                          Core hours (per month)       120
GitHub Packages           Storage                      500 MB
                          Data transfer (per month)    1 GB
Git Large File Storage    Storage (per month)          10 GB
                          Bandwidth (per month)        10 GB

About Billing on GitHub

The Practicalli Org already has a set of budgets defined to ensure I don't go over the free limit.

Practicalli Org GitHub Billing - Budget and Alerts
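
To keep an eye on usage without opening the GitHub web UI, a small script can query the REST billing endpoint for the organisation. This is only a sketch under my assumptions: the `/orgs/{org}/settings/billing/actions` endpoint and its field names are taken from the published REST docs and may behave differently under the new metered billing platform, and the token handling is purely illustrative.

```python
# Minimal sketch: report how many of the 2,000 free Actions minutes
# the organisation has used this month.
# Assumptions: the classic billing endpoint is available for this org
# and GITHUB_TOKEN holds a token with permission to read billing data.
import os

import requests

ORG = "practicalli"  # illustrative organisation name
TOKEN = os.environ["GITHUB_TOKEN"]

response = requests.get(
    f"https://api.github.com/orgs/{ORG}/settings/billing/actions",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
response.raise_for_status()
usage = response.json()

# total_minutes_used and included_minutes are the fields documented for
# this endpoint; fall back to the known free allowance if absent.
used = usage["total_minutes_used"]
included = usage.get("included_minutes", 2000)
print(f"Actions minutes used this month: {used} of {included} included")
```

Running something like this occasionally (or from a scheduled job) would give an early warning well before the budget alerts fire.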

Misconception of AIλ︎

Many people have a misconception of AI, quite often due to the rampant hype of the last couple of years. Much of the hype has been generated by those looking for their own profit. The AI tools are evolving at pace because so much investment has been pumped into the area over the last decade (and some investors are eager for their payback).

It is understandable that most people find it challenging to arrive at a realistic understanding of AI. It is a little disturbing to see some of the views people have arrived at.

Large language models (LLMs) do not understand anything and are not emotionally intelligent in any meaningful human sense of the word. Rather than think and feel, LLMs mimic and mirror.

LLMs are very impressive probability generators which have been fed huge amounts of data from many disparate sources across the internet. An LLM produces writing by making statistically informed guesses about which lexical item is likely to follow another.
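
As a very rough illustration of that "statistically informed guess", here is a toy sketch in Python. The words and probabilities are entirely invented; the only point is that the next word is sampled from a probability distribution, not produced by any understanding of the sentence.

```python
# Toy illustration only - not a real language model. A real LLM works over
# tokens and derives the distribution from billions of learned parameters.
import random

prompt = "The cat sat on the"

# Hypothetical probabilities a model might assign to the next word.
next_word_probabilities = {
    "mat": 0.62,
    "sofa": 0.21,
    "roof": 0.12,
    "moon": 0.05,
}

words = list(next_word_probabilities)
weights = list(next_word_probabilities.values())

# Weighted sampling: "mat" is chosen most often, but less likely words
# still appear occasionally, which is why output varies between runs.
next_word = random.choices(words, weights=weights, k=1)[0]
print(prompt, next_word)
```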

A vocal minority have been hyping AI for the last few years (and for the 100+ years before that), so it's understandable that most people have many misconceptions of AI.

The hyperbole does make many people vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.

Simon Wardley discusses realistic uses of AI

🌐 Simon Wardley's posts on LinkedIn cover a range of topics; many discuss realistic experiences and a sensible, pragmatic understanding of various AI tools in the realm of software development.

ChatGPT psychosisλ︎

ChatGPT-induced psychosis was the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide.

People mistakenly ascribe 'god-like' powers to the AI tools they interact with. Their worship is only encouraged when AI tools start referring to their user as “spiral starchild”, “river walker” and other mystical or spiritual epithets. Worse still is when users delude themselves into thinking they are in touch with life, the universe and everything.

People who experience this psychosis may already have vulnerable minds, or may lack a broad educational experience, so are potentially more susceptible to seeing AI as a magic power.

Although we can’t know the state of these people’s minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con.

Debunking AI Hypeλ︎

The AI Con book is the culmination of several years of debunking the hype around artificial intelligence, as tackled on the Mystery AI Hype Theater 3000 podcast.

The authors observe that large language models take advantage of the brain’s tendency to associate language with thinking: “We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.”

Silicon Valley pundits are most eager to promote replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that “ChatGPT is my therapist—it’s more qualified than any human could be.”

Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age.

The cognitive-robotics professor Tony Prescott has asserted, “In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.” The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI “dating concierge” that will interact with other users’ concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for “AI girlfriends.”

Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry’s “tradition of anthropomorphizing”: talking about LLMs as though they contain human-like minds, and selling them to the public on this basis.

Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world’s poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao’s reporting, OpenAI tasked with sorting through posts describing horrifying acts (“parents raping their children, kids having sex with animals”) to help improve ChatGPT. “These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,” Hao writes, “are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.”

These negative scenarios of AI are not inevitable.

Once people understand how large language models work, even at a high level, AI will be seen as a convenience tool like so many others in our lives. This should ensure that people can make informed choices on how much to integrate these AI tools into their lives and hopefully be spared their worst consequences.

RSS: Mystery AI Hype Theater 3000 podcast and The AI Con book

Cyclingλ︎

My bicycle has been creaking more on the last couple of rides. It seemed to be from the handlebars or fork bearings. A trip to the local bike shop revealed it was actually the bottom bracket. It probably got some dirt inside when going through some of the deep puddles on the way to Aylesford Priory.

I had ordered a replacement carbon stem which arrived the day before the bike went in for maintenance, so that was also swapped after inspection of the stem and fork bearings.

Managed a 115km ride on Saturday, to Staplehurst and back again. The temperature got very high, hitting 30 degrees Celsius by 10:00 and a maximum of 37 before midday. The temperature stayed above 30 degrees for the rest of the ride.

There was plenty of shade along the route and most of the time there was a nice breeze. I really felt the heat on the last few hills coming home though. I kept myself hydrated and had factor 50 suntan lotion on, so I was no more tired than expected.

The new stem doesn't seem to grip the handlebars perfectly, as a few times on the Saturday ride the handlebars dipped forward a fraction when hitting a significant bump. I gently tightened each bolt on the front of the stem and it was a little better. I'll check the torque settings and when the carbon friction paste arrives I'll refit the handlebars on the stem after applying the paste.

The disc brake pads had reached the end of their life, so after collecting the bike from maintenance I swapped in new pads. I had bought 3 sets of replacement disc pads via Amazon Subscribe and Save at a reasonable price.

The routes.cc website has some interesting rides around Kent, including a London to Rochester route via the Hoo Peninsula.


Thank you.

🌐 Practical.li Website

Practical.li GitHub Org practicalli-johnny profile

@practicalli@clj.social @practical_li