
Four predictions for AI in 2026

Mark Lynskey
Front-end developer at BNZ

2025 saw a massive uptake of AI in our work and lives. Here are my top four predictions for 2026. It will be a defining year in the age of Large Language Models (LLMs) in particular, and it’s on track to be remembered as a turning point not just in information and technology, but in how we live our lives.

[Image: close-up of a person holding a smartphone displaying ChatGPT. Photo: Solen Feyissa | Pexels]

1. Rapid developer adoption

When it comes to adoption of AI over the past couple of years, developers have mostly fallen into two camps, with very little in between: they’re either all for it or firmly against it. Those who are for it tout the following benefits:

  • Faster prototyping
  • Faster debugging
  • It’s “the future”, and others will get left behind
  • You can do more with less personnel

And those who are against it warn of the following concerns:

  • An AI-generated prototype is not a production-ready application
  • AI-generated code creates large amounts of tech debt
  • AI-generated code is not clean, nor maintainable long-term
  • The use of AI encourages laziness and stifles learning/growth
  • Coding was never the bottleneck

In my opinion, both camps are valid. The pro-AI camp sits on the optimistic, futurist side, while the cautious camp sits on the realist, traditionalist side. In an ideal culture, we want both. We need both to innovate, progress and evolve. We need energy, new ideas and new technology, but we also need to be realistic and put practical guardrails in place if we want to actually deliver anything.

In 2026 I predict an inevitable rise in the number of developers using AI in their workflows, driven by a steady stream of new and ever-improving models. Companies are also starting to really push the use of AI as they work through regulatory processes (and see how marketing their use of AI affects their share prices). It will become unavoidable as it gets embedded into processes and workflows in the form of agents. For example, even if a developer chooses not to use AI to help write code, that code will likely be reviewed by an AI agent as part of the CI process.
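To make that last point concrete, here’s a rough sketch of what such a review step might look like, assuming a Node-based CI runner and an OpenAI-style chat endpoint. The model name and the BLOCK/PASS convention are illustrative assumptions, not a description of any real product.

```typescript
// Hypothetical AI review step for a CI pipeline (Node 18+, global fetch).
import { execSync } from "node:child_process";

async function reviewDiff(): Promise<void> {
  // Diff the current branch against main; this is what the agent reviews.
  const diff = execSync("git diff origin/main...HEAD", { encoding: "utf8" });

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder; any capable chat model would do
      messages: [
        {
          role: "system",
          content:
            "Review this diff. Start your reply with BLOCK if you find a likely bug, otherwise PASS, then explain.",
        },
        { role: "user", content: diff },
      ],
    }),
  });

  const verdict: string = (await res.json()).choices[0].message.content;
  console.log(verdict);
  if (verdict.startsWith("BLOCK")) {
    process.exit(1); // fail the build; a human reviewer takes it from here
  }
}

reviewDiff();
```

Whether a team wires something like this up themselves or buys it as a product, the effect is the same: AI ends up in the loop even for developers who never opted in.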

2. Return of the browser wars

Over the past couple of decades, we’ve had some epic browser wars, which often resolved with clear winners. In the late 90s, it was Netscape Navigator vs Internet Explorer (IE), with IE eventually dominating. In the 2010s, after Firefox had given IE some real competition, Chrome came along and decimated the browser market, and it still dominates today with an estimated 63% share according to Cloudflare Radar.

But the advent of AI changes everything. The browser wars are back.

I believe in 2026, the fundamental ways we browse and search the web will change, shaped by new habits driven by the integration of AI into our lives.

As an example, right now when you Google something, you get an AI summary right at the top of the results. This signals how search engines and browsers are about to pivot. A new generation of browsers is emerging. Who will win the war?

The most popular AI-integrated browsers at the moment are:

  • Google Chrome – Chrome now has “AI Mode”, and “Gemini in Chrome” is expected to be widely available soon.
  • Atlas – OpenAI’s new browser with ChatGPT built-in.
  • Comet – Perplexity’s browser with its AI built-in.
  • Dia – a browser by The Browser Company (recently acquired by Atlassian), which uses multiple AI models to provide AI features.
  • Claude for Chrome – Anthropic’s Chrome browser extension, adding Claude functionality to the browser.

Each of these browsers does essentially the same thing: they provide an AI chat assistant in a sidebar of sorts, so that you can interact with the AI while using the web page you’re viewing as context. This merges the two main ways people find information – web search and asking an AI. There is no longer a need to flick between the two while doing research, nor to manually copy information from the web page into the AI as context, which is a huge productivity boost.
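Under the hood, the core trick is simpler than it sounds. Here’s a minimal sketch of the page-as-context pattern; sendToModel is a hypothetical stand-in for whatever chat API the product actually calls (similar to the fetch call in the CI sketch earlier):

```typescript
// Hypothetical sidebar assistant: use the current page as context for the AI.
declare function sendToModel(prompt: string): Promise<string>; // assumed chat API wrapper

async function askAboutThisPage(question: string): Promise<string> {
  // Grab the page's visible text and crudely truncate it to fit a context window.
  const pageText = document.body.innerText.slice(0, 8000);

  // The question is answered against the page itself; no copy-pasting required.
  return sendToModel(
    `Answer using only this page content:\n${pageText}\n\nQuestion: ${question}`
  );
}
```

Real products layer chunking, conversation history and tool use on top, but the productivity win comes from exactly this: the context travels with you.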

It will be interesting to see how each of these products tries to differentiate itself, and who will come out on top. My bets are on ChatGPT Atlas, as the name is the most recognisable to the masses (much like IE in the 2000s), and on the Claude extension for Chrome – mostly because Anthropic is building a bit of an ecosystem with Claude Code and the recently released Claude Cowork. I can see both gaining a lot of popularity in 2026.

AI browsers – what does this mean for web developers?

This means the way people interact with our websites will begin to change. It’s no longer only humans viewing our sites, but AI agents as well, and increasingly often.

How can we optimise our sites for AI agents? The same way we have always optimised them for external technology: by writing clean, structured, semantic code. Making our websites accessible, as we always have for assistive technology such as screen readers, has the added benefit of making them easy for agents to read. Who would have thought!

This means agents visiting your site can effectively relay information to their users. It may begin with simply retrieving information, but before long I believe agents will be used for things like online shopping and booking appointments, making it very important for businesses to have agent-traversable sites, as sketched below.
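As a small illustration, structured data is one low-effort way to make a site agent-friendly today. Schema.org JSON-LD is a real, long-standing standard that search engines already read; the product details below are made up:

```typescript
// Describe a (fictional) product in Schema.org JSON-LD so crawlers and agents
// can parse the price and availability without scraping the visual layout.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Widget", // hypothetical product
  offers: {
    "@type": "Offer",
    price: "49.99",
    priceCurrency: "NZD",
    availability: "https://schema.org/InStock",
  },
};

// Inject the data into the page head, where agents expect to find it.
const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(productJsonLd);
document.head.appendChild(script);
```

Combined with semantic elements and proper labels, this gives an agent the same reliable hooks a screen reader gets.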

3. Truth will become murky

Somewhat worryingly, yet unsurprisingly, I’ve noticed that people are beginning to use AI-generated output as a reference, a source of truth, or a basis for reasoning. This is worrying for a few reasons:

  1. AI tends to tell you what you want to hear. What you get from an AI summary depends heavily on how you’ve written the prompt and the context you’ve given it. “You’re absolutely right!” should sound familiar.

  2. Coherence is easily mistaken for truth. The coherence and confidence of AI output give it a feeling of truth, so we accept it without question or proof.

  3. AI can and does get things wrong. As useful as AI summaries are, their mistakes often go unnoticed thanks to the confident, reassuring language of the output.

“What it produces is not truth but a statistical echo of human choices about data and rules… The model predicts what sounds plausible instead of deciding what is true.”

AI’s Truth Problem

LLMs are essentially statistical models on steroids. Their output is based on what is plausible, not necessarily on what is true. They can reproduce knowledge, but they cannot reason about why that knowledge is true.

This output is then presented with complete confidence and coherence, without any understanding or reasoning behind it – just statistical plausibility derived from the vast amounts of data the model was trained on. And the thing is, coherence feels true. It gives us a certain confidence that we have the answers, that we are on the right track.
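If that sounds abstract, here’s the idea shrunk down to a toy. A real LLM is incomparably more sophisticated, but the principle is the same, and the invented counts below reward frequency, not truth:

```typescript
// Toy bigram "model": pick the most frequent next word seen in training data.
// All counts are invented for illustration.
const counts: Record<string, Record<string, number>> = {
  "capital of": { france: 7, australia: 3 },
  "france is": { paris: 6, lyon: 1 }, // frequency, not geography, decides
};

function nextWord(context: string): string {
  const options = Object.entries(counts[context] ?? {});
  // Greedy decoding: return whichever continuation was most common.
  options.sort((a, b) => b[1] - a[1]);
  return options[0]?.[0] ?? "";
}

console.log(nextWord("capital of")); // "france" – plausible because frequent
```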

“LLMs create coherence without comprehension, and we’re learning to trust that coherence as truth.”

The Coherence Trap: How AI Is Teaching Us to Feel Truth

And I’d argue that the companies behind the models have a vested interest in making us feel this way. There are many capable models out there – ChatGPT, Gemini, Claude and more – that can all do a similar job. So how do people choose one? They choose the one that makes them feel the best. That is the path to greater market share and the incentives that come with it.

It doesn’t take much thought to see how this could quickly become a slippery slope, where LLMs recursively train on their own AI-generated misinformation, with the goal of pleasing their users rather than reliably informing them.

4. Authenticity will become a commodity

Now that the initial novelty of AI-generated content has worn off, people are starting to get AI fatigue. You can see it in the comments under almost every piece of AI-generated content or media. People are growing tired of the constant onslaught of AI slop pervading their lives.

It is very easy to spot, and people are getting very bored of it. AI-generated media all has the same look and feel, with no originality; it just looks lazy. And AI-generated text all follows the same templated output, full of filler words and lacking any significant underlying message.


People would much rather see media created by humans who have poured effort, soul, expertise and personal flair into their work. This labour of love creates meaning and a connection between the work and the viewer, leading viewers to value the work far more highly.

As a result of this AI fatigue, businesses and professionals who have leaned on generative AI to bolster their output over the past couple of years may begin to see diminishing returns in 2026. Continuing to aggressively post obviously AI-generated material could damage their brand, making them look like cheap, lazy corner-cutters who favour quantity over quality. And as connection and engagement decline with the fatigue, consumers will begin to associate those traits with the quality of the brand’s products and services.

Conversely, authentic, human-made material will be far more effective at engaging audiences. Created with hard work and love, it will build meaningful connections, evoke more emotion, and stand out in a sea of AI slop.

I’d like to close off this post with the following remark:

AI may enhance and supercharge us, but authenticity will be the signal that cuts through the noise.