By William Poole-Wilson, managing director, and Stephen Hunter, head of AI, at WILL+Partners
As a design practice fascinated by the practical deployment of AI, we can’t help but be reminded of the early days of the personal computer, which had a similarly profound impact on the design of the workplace. Back in the 1970s and early 1980s, most computers were giant, expensive mainframes that only large companies and universities could afford. But then a few visionary companies started putting computers on desktops, first in workplaces, then in schools and finally in homes. Suddenly, computing power was accessible to everyone, but it needed different spaces. A ‘bicycle for the mind’, it allowed us to go further and faster than ever before, unleashing waves of creativity and productivity that have transformed every aspect of our lives.
Today, we’re approaching a similar inflection point with AI. Just like those early PCs, AI has moved out of the research labs and into our daily lives. But we’re only scratching the surface of what’s possible; the next revolution in human-machine collaboration is about to arrive.
Most People Currently Use AI Sub-Optimally
The launch of ChatGPT was the iPhone moment for AI. With 1m users in just five days and 100m in two months, it was one of the most successful product launches in history. But did you know that when you upload a long document to ChatGPT, the AI doesn’t actually read the whole document or look at any of the images? Instead, the file is converted to plain text (often suffering conversion errors where the layout is misinterpreted), cut up into smaller ‘chunks’, and a simple search is performed; only the most relevant chunks are sent (along with your prompt) as the ‘context’ for the AI model to process. Imagine trying to summarise a book by looking at a disjointed selection of its torn-out pages. This process is known as Retrieval-Augmented Generation (RAG). It will always have a place, but for most use cases RAG is self-evidently inferior to the alternative: Long Context.
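As a rough illustration, here is a minimal Python sketch of that chunk-and-retrieve pipeline. The chunk size, the simple keyword-overlap scoring and the sample strings are our own assumptions for illustration; production RAG systems typically use embedding-based search, but the overall shape (split, rank, keep a few chunks, discard the rest) is the same.

```python
# Minimal sketch of the chunk-and-retrieve (RAG) process described above.
# Chunk size and keyword-overlap scoring are illustrative assumptions only.

def chunk_text(text: str, chunk_size: int = 40) -> list[str]:
    """Split plain text into fixed-size word chunks, losing the document's structure."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def relevance(chunk: str, query: str) -> int:
    """Naive relevance score: how many query words appear in the chunk."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for word in query.lower().split() if word in chunk_words)

def build_context(document: str, query: str, top_k: int = 3) -> str:
    """Keep only the top-k 'most relevant' chunks; the rest of the document
    never reaches the model, which is why answers can feel disjointed."""
    chunks = chunk_text(document)
    ranked = sorted(chunks, key=lambda c: relevance(c, query), reverse=True)
    return "\n---\n".join(ranked[:top_k])

document = "...full text of a long report, already flattened to plain text..."
query = "What does the report recommend for workplace design?"
prompt = f"Context:\n{build_context(document, query)}\n\nQuestion: {query}"
# 'prompt' is what the model actually sees: your question plus a few chunks,
# not the whole document.
```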
Long Context Models
Starting from early models like GPT-1 in 2018, with a maximum context length of 512 tokens, context windows have grown exponentially with each new generation of models, reaching 4k tokens when ChatGPT launched in 2022 and now up to 2 million tokens with Google’s Gemini 1.5 Pro: roughly a 4,000-fold increase in just six years. This rapid expansion allows AI models to process and understand much larger amounts of text in a single pass, leading to more coherent, knowledgeable and capable responses… as long as you supply the model with the right information!
ChatGPT Personal and Team accounts are currently limited to a 32k context window, so we advise switching to a longer-context option such as ChatGPT Enterprise (128k), Claude (200k) or Gemini (2M). Most AI platforms also don’t make it obvious what happens when you upload a document. Our advice at W+P is simple: pre-process your documents by converting them to Markdown and you will see significant improvements in LLM performance.
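As a sketch of that pre-processing step, the snippet below uses pypandoc (a Python wrapper around the Pandoc converter, which must be installed) to turn a document into Markdown and make a rough size check. The file names and the 4-characters-per-token estimate are illustrative assumptions, not a prescription; other conversion tools would work just as well.

```python
# One possible pre-processing step before uploading to a long-context model:
# convert the source document to Markdown so headings, lists and tables
# survive as structured plain text. File names here are hypothetical.
import pypandoc

markdown_text = pypandoc.convert_file("project_brief.docx", "md")

# Rough token estimate (English text averages ~4 characters per token),
# useful for checking the document fits the model's context window.
approx_tokens = len(markdown_text) // 4
print(f"Approximately {approx_tokens:,} tokens")
if approx_tokens > 200_000:
    print("Too long for a 200k context window; consider splitting the document.")

with open("project_brief.md", "w", encoding="utf-8") as handle:
    handle.write(markdown_text)
```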
Intelligence Too Cheap to Meter
Ephemeralization, coined by Buckminster Fuller in 1938, is the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing”. We’re seeing this play out in real time with the development of AI models like GPT-4o mini. Just two years ago, the cost per token of the most advanced language models was orders of magnitude higher than it is today. Thanks to the tireless work of researchers and engineers, we’ve seen a 99% reduction in that cost, while simultaneously delivering a quantum leap in the intelligence and capabilities of these models. As models continue to become more efficient and more capable, the possibilities for what we can build with them are becoming truly limitless.
We will soon be in a world where this intelligence is too cheap to meter. Where everyone has an AI tutor that can adapt to their unique learning style and help them master subjects at their own pace. Where doctors (and consultants!) can use AI to diagnose diseases earlier and more accurately, and to develop personalised treatment plans for each patient. Where businesses can use AI to solve complex problems and to create new products and services that we can’t even dream of today. That future is almost here, and it’s going to be incredible.
Towards Superintelligence
- Level 1: Chatbots
- Level 2: Reasoners
- Level 3: Agents
- Level 4: Innovators
- Level 5: Organisations
OpenAI recently unveiled a five-level classification system to track progress toward artificial general intelligence (AGI) and has suggested that its next-generation models will be Level 2, which it calls “Reasoners”. This level represents AI systems capable of problem-solving on a par with a person with doctorate-level education. The subsequent levels include “Agents” (Level 3), which can perform multi-day tasks on behalf of users, “Innovators” (Level 4), which can generate new innovations, and finally “Organisations” (Level 5): AI systems capable of performing the work of entire businesses. As we progress through these levels, the potential applications and impacts of AI will expand dramatically.
Next-generation AI models are effectively embargoed until after the US election on 5th November, but expect significant gains in reasoning ability and intelligence when they arrive: each of the main providers is currently training and testing models more than an order of magnitude larger than today’s largest and most intelligent models.
Navigating the Challenges
As with any powerful new tool, AI also brings with it profound challenges and responsibilities. One significant concern is the potential for AI to perpetuate or even amplify biases present in the data it is trained on, leading to unfair or discriminatory outcomes. AI bias is already prevalent, and it is crucial that we learn how to teach AI to discern bias; that is no easy task. AI could also be used maliciously, for example to create deepfakes or spread misinformation. There are also legitimate concerns about the impact of AI on jobs and the workforce, but equally there is real scope for it to improve and inspire that workforce.
It’s our responsibility to be aware of these possibilities and to work proactively to mitigate them. This means ensuring that the AI systems we design and implement are transparent, accountable, and aligned with ethical principles. It means working closely with stakeholders across the organisation to identify and address potential risks early on. And it means staying up to date with the latest research and best practices in AI ethics and governance.
Shaping the Future of AI
As architects, we have a chance, right now, to shape the trajectory of how this technology is used in the workplace. To ensure that it is deployed not just with capability, but with wisdom, and in spaces that suit us. To put it on a path that enhances and empowers the human experience, rather than diminishing or replacing it.
This is a monumental task, and it will require the best minds of our generation working together. We need the dreamers and the visionaries, but we also need the ethicists, the philosophers, the artists and the humanists. Because the questions raised by AI are not just technical, but profoundly human.
If the history of computing has taught us anything, it’s that the most transformative innovations often start small, with a few crazy dreamers who see the world not as it is, but as it could be. If we approach this AI revolution with that same spirit of creativity, empathy, and optimism, there’s no limit to what we can achieve.