Time to Take a Breath

By Paul Preiss, of Iasa Global
Building and growing the architect profession.

The outcomes of the Paris discussions on AI were less than exhilarating, at least politically. The general trend of AI is stoking the twin fears of joblessness/business failure and extinction-level outcomes, whether through war, economy, or environment. This competes directly with the equally grandiose rhetoric of wealth creation and utopian worlds. And of course, as a Reddit reader, I am treated to a beautiful set of concocted and convoluted conspiracy theories (one of my guilty pleasures for Sunday reading). But 25 years around architects has taught me a critical lesson: take a deep breath and stand back from the problem, whether that problem is code, a painting, a system malfunction, a stakeholder, or a relationship. Or, in this case, ‘Global Thermonuclear War’ or its apparent AI equivalent. 😉

The world is going through a major growing phase right now. I don’t believe this is specifically about Artificial Intelligence techniques like LLMs. I am less worried about AI killing the planet or putting our children in pods to power its world domination than I am about how we learn to work together to adapt to a world, and ultimately worlds, where technology is as fundamental to our existence as food, clothing, and shelter. This existential crisis is about the adaptation of high technology to human existence and society.

Predicting the Future

Back around 2005 I spoke at a software architecture conference and described the possibility that one day software would be to blame for a plane crash, and that we would face a major problem on that day. I was chided and laughed at a bit then. I had no idea which technology crisis would actually involve airplanes, but I saw the direction of human dependence on technology and the sheer lack of coordinated societal mechanisms for dealing with it, especially technology that impacts our actual lives, and that scenario sprang to mind.

Since then I have predicted that an ongoing escalation would emerge in humanity, both in culture and in societal power structures. Around 2018 I was asked to speak on the rising influence of AI at one of our ITARC conferences in Sweden. One of the points I made was the need to establish authentic implementations of trust, along with legal methods for navigating it. I will speak more about this at some point, but the simple fact is that we have very few societal mechanisms for managing full and partial trust, identity, liability, and authorization in a technology world, much less an AI-enabled one. And our technology equivalents are just as deeply flawed. Should your AI agent hack into a school to improve your daughter’s grades? It is, after all, an effective means of improving them, at least absent a LOT of thinking about the topic. This lack of corporate, legal, cultural, and architectural methods for controlling technology is as much to blame for the hype, the fear, the uncertainty, and the possible impacts of our current AI-powered hype cycle as any real impacts.

The Past Always Repeats

This is not the first time humanity has navigated a species-level evolution in knowledge and abilities. And no, I don’t mean the internet; that was simply the opening salvo in the technology adaptation we are currently experiencing.

Instead, look to human use of tools, the cotton gin, the formation of states, democracy, written language, medicine, industrialization (obviously in no particular order), and similar transitions that caused humanity to adapt its social, cultural, financial, and legal institutions to cope with a changing set of abilities.

And to cut to the chase: we have mostly made the right decisions, and we are likely to do so again. Oh I know, I know, doom and gloom and clickbait about megalomania, destruction, and robot wars sell papers, but they don’t get us closer to a solution.

In general, humans evolve systems that are mostly good for the humans in the areas where they live and according to cultural norms they can abide by, with lots of ‘yeah, but what about?’ examples of that not being the case. In general, humans from other cultural backgrounds or who live in different areas disapprove of many of these adaptations and accepted norms. Outside of those examples are the extreme cases of dictatorial regimes driven by the power of a line of autocrats. That one is a distinctly human pattern that I hope someday we can truly eradicate. I will come back to this risk in a moment, as technology may exacerbate it in places if we don’t take certain precautions.

The Changes We Need

I posted a blog a year ago about the way technology is treated when it emerges. I made the claim that if we treated medical approaches the same way, many or even most of us would have been killed by a rogue medical practice. That was not tongue-in-cheek. Our current societal approach to technology is, roughly speaking, ‘sure, let’s try that on a bunch of people and see what happens.’ This is a great thought experiment for the changes we will need to make for technology to serve humanity (instead of the scarier Matrix-like version).

The same thing happened slowly with industrial technology, what we sometimes like to call ‘Operational Technology.’ In the late 1800s and early 1900s it was all speed, investment, and chaos. Coal mines, child labor, indentured servitude, environmental horrors, the rich controlling political outcomes, and greedy people making money on ‘Growth,’ ‘Freedom,’ ‘Wealth for All,’ and similar approaches to manufacturing, science, and energy creation dominated the world’s stage. And, well, it has taken us a long time to learn the lessons from that era. Obviously humans have not fully learned those lessons on a global scale. One could argue that the high-technology craze is the extension of that learning process.

But a few exceptional lessons have been learned in places where possibly destructive innovation meets population:

  1. Continued wealth creation involves a continuous growth cycle that is tempered by virtuous behaviors: we no longer allow child labor or allow people to be harmed, dangerous waste must be minimized, circular models will outcompete in the long run, and so on.
  2. When humans en masse are impacted negatively, a crisis emerges between industry power and political power. We see this quite starkly right now in the ‘tech bros.’ Historically, political power representing long-term population health has eventually dominated, sometimes sadly at the cost of many lives.
  3. Classes of professionals, laws, regulations, and societal norms are established to protect the layperson and the average business from abuses of innovation. While far from perfect, these are largely trusted in day-to-day management of the topic area, up to and including a global crisis like COVID and the unbelievable and wonderful performance of our medical professionals and even medical businesses!
  4. Different societies take different approaches globally, leading to continued innovation, friction, difficulties, and learning.
  5. The guidance of the field and the industry becomes balanced, and humans face their next wave.

I will soon write a post clarifying the steps we need to take to adapt to this technology pressure, for example technology business and impact R&D, administration, licensure, and personal liability.


All of the above, I believe, is extremely likely and necessary at this point. I founded Iasa with the belief that someday the world would need trained and licensed digital professionals for exactly this reason.

The Rising Pressure

The level of pressure is definitely rising and will likely continue to do so. As this pressure rises, so will the continued failures of our current system of systems in dealing with it. By that I mean: I predict some, if not all, of the following will occur before the vast majority of humans begin demanding real adaptations.

  1. We will experience more and faster catastrophic business and systems failures. CrowdStrike was an appetizer; the Irish healthcare system meltdown, a drink before dinner. Major security breaches. Identity theft. Rising technical-debt-related failures. Worsening travel and supply chain conditions. Potentially there will be deaths attributable directly to technology failure. Major ecological disasters like oil spills are possible. Power and heat outages. Plus the steady rise of the inconvenient, the annoying, and the ‘enshittification’ growing daily.
  2. We will experience more and more mass societal confusion and anger. Social media bans, AI bans, job loss, state-level technology escalations, radical speech, and acts of aggression at small and large scales will escalate as impact, or more accurately fear of impact, increases. We will begin to see technology vandalism and similar acts of aggression toward a technology future.
  3. The economy will fluctuate as politics and technology come to a head, and investment cycles will begin to deteriorate, since consumerism rests on trust and feelings of safety.
  4. We may see the rise of at least one technology-driven regime. Hopefully we will be able to limit these, due to the inherently vulnerable nature of the technology advantage.
  5. More small, and possibly large, items go here, but I hate negative predictions.

Ha, and there I said I would avoid doom-and-gloom predictions. But then, I don’t believe we will allow many of these examples to occur. Technology is deeply vulnerable to disruption (power and network lines, hacking, knowledge-worker walkouts, etc.), and honestly the impacts will affect the rich as well as the poor, so I believe we will act relatively quickly.

Getting Specific

Recently I spoke to a CTO whose organization had gone from daily or even hourly releases of their product back to monthly (with the possibility of longer when desirable). He said, ‘It has so greatly increased the quality and enjoyment of work that it is noticeable in the hallways.’ Not all innovation is necessary: there really was no business need driving employees to that level of performance, and without such a need the pace created a negative speed-driven culture, massive confusion, and lots and lots of errors. Sound familiar? It is a microcosm of what I describe above. We are currently racing toward solutions that honestly don’t sound that great. As one person described it (anyone know who?), “I wanted AI and robots to do the laundry and the cleaning so I could paint and write music, not AI that could write music and paint so I could do laundry and dishes.”

Here are some questionable goals that a very few humans seem eager to achieve, and some gaps that the rest of us may want to address:

  1. Replacing employment and work with machines in any hugely significant way is a complete unknown; we literally do not know what a society without work would look like.
  2. Heavy, strong metallic devices in the home and on the roads that can act independently (or remotely controlled) may be very dangerous to human lives.
  3. Robotically enhanced human conflict.
  4. Technology modified humans.
  5. Centralization and urbanization of technology.
  6. Technology controlled decisions, workflows or actions.
  7. Gating access to technology by class and wealth.
  8. Lack of regulated methods for technology rollout to living creatures.
  9. Lack of highly educated and ethically controlled career paths in technology.
  10. Technology controlled societal functions (electricity, food supply, education, health).
  11. Centralized technology decision making and data access.
  12. Investment methods in technology and the market.
  13. Recognition of technology as a legal entity and definitions of slavery (in case of consciousness).
  14. Trust, partial trust and digital self-hood.
  15. A digital bill-of-rights.
  16. Oh and plenty more…

Remember, as a professional association leader I am working to establish a real base of authority for educated, experienced, and ethically bound professionals to represent us and the digital health that society wants to achieve. I believe that is the shortest and cheapest adjustment to a radical new set of human achievements and the awesome benefits and risks they represent.

I should say that none of these opinions represent, or are meant to be mistaken for, the beliefs of Iasa members, the Iasa board of directors, or Iasa partners. They are my own.