We’ve been here before. Claims that AI is over-hyped are as old as AI itself, and sometimes the debate gets silly. Those are not my words: Mitch Waldrop, a celebrated technology writer who authored various articles and books on AI in the 1980s and 1990s, said there had been a “combination of silliness and seriousness.”
Why is that significant? He was describing the debates he had heard at a 1984 AAAI conference, at a seminar entitled “The Dark Ages of AI.” It was at this same conference that Roger Schank and Marvin Minsky supposedly coined the phrase “AI Winter.”
Evidence that they actually said this seems sketchy; many articles cite each other, but the above article, which cites Mitch Waldrop, is the only first-hand account I could find.
But maybe it doesn’t matter who said what: it is clear that AI has jumped through hoops and suffered what people refer to as an “AI Winter” on many occasions.
Many who doubt AI today talk about how it has exploded onto the scene, but in reality I would describe the ascendance of AI as more akin to a slow burn.
If Wikipedia is any guide, you can trace AI disappointment back to the 1960s.
The AI media blitz of the last two years was about as surprising as the news of a sunrise this morning. The only unpredictable element was the timing — no one predicted the date ChatGPT would hit the headlines, but predictions that something of that nature would happen at some point were commonplace.
But the year Wikipedia refers to as year one when describing AI winters is significant. It says 1966 saw the “failure of machine translation.” And 1966 was an interesting year — it stood betwixt 1965, when a certain Gordon Moore noticed that the number of transistors on an integrated circuit doubles every two years, giving rise to Moore’s Law, and the Summer of Love of 1967, when the Hippie movement spurred a cultural revolution.
Of course, there was AI disappointment in the 1960s, and indeed in the 1970s, 1980s and the decade after that. Assuming Moore’s Law is accurate and the number of transistors on a chip doubled every two years during the half-century following Moore’s observation, the typical computer of 2014 was well over ten million times more powerful than its 1965 equivalent.
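For the curious, that growth factor is easy to check with a back-of-the-envelope calculation — a sketch that assumes an idealised clean doubling every two years, which real chips only roughly followed:

```python
# Back-of-the-envelope estimate of Moore's Law growth from 1965 to 2014.
# Assumes transistor counts double exactly once every two years.
years = 2014 - 1965            # 49 years
doublings = years // 2         # 24 complete doublings
factor = 2 ** doublings        # cumulative growth in transistor count
print(f"{doublings} doublings -> a factor of {factor:,}")
# 24 doublings -> a factor of 16,777,216
```

Twenty-four doublings give a factor of nearly 17 million — the precise number matters less than the sheer scale of the compounding.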
As with so many new technologies, its progress is slow until the conditions are right, then it changes the world.
This excellent article traces the story of steam power and subsequent technologies of that ilk. It states: “The first practical steam engine was built by Thomas Newcomen in 1712. It was used to pump water out of mines. An astute observer might have looked at this and said: ‘It’s clear where this is going. The engine will power everything: factories, ships, carriages. Horses will become obsolete!’ This person would have been right—but they might have been surprised to find, two hundred years later, that we were still using horses to plough fields.”
AI will change the world — there is no doubt about that. But will it take another 200 years?
Answer: no, it won’t.
For one thing, it is already old — how far back must we trace its origin? Maybe we need to go back to the work of Babbage and Ada Lovelace whose work on computers dates to the mid-1800s.
Is AI over-hyped? Of course it is, in much the same way horseless carriages were over-hyped, or indeed the internet. They still changed the world, though.
Moore’s Law used to be my favourite law of technology, but these days I prefer Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
As for AI, the only question regarding its importance that really matters is where it sits on the hype cycle — from germ of an idea, through hype, bubble and crash, to reality and changing the world.
And that takes us to the here and now and the practical applications of AI.
What is the single biggest threat facing humanity? Well, there is more than one contender. Perhaps you can lump all the challenges under one generic heading: “collective stupidity.”
But AI can be a tool to help tackle two of the problems. First, there is fake news, misinformation, echo chambers and the tendency of groups to believe what they want to believe. At least AI can fact-check and throw into the cauldron of ideas things called facts.
Secondly, there is climate change. Sure, AI has a high energy footprint: server farms, data centres and neural networks are huge energy guzzlers. But guzzling energy is not the same thing as being a huge emitter of carbon dioxide.
But let’s be more specific — what about AI supporting ESG and ESG Reporting?
In the next episode of the ESG Show, we have three experts — but they have different opinions.
Robin Boustead, author of the soon-to-be-published “ESG Reporting Manual”, is an AI cynic — he says it is over-hyped and, in the specific area of ESG reporting, a bit like using a sledgehammer to crack a nut. “I think AI has the potential to solve some really complicated problems,” he says, “but right now, the problems we face with ESG are not that complicated or that difficult to unravel.”
Don Kasper, the Co-Founder and SVP of Innovation at Liminal Data, has a quite different take — the company provides an AI-driven sustainability platform that delivers actionable insights and automates ESG data collection.
Sarah Burnett FBCS — an old friend of Techopian — is a renowned industry analyst, advisor, and author specialising in intelligent automation. Her acclaimed book, “The Autonomous Enterprise – Powered by AI,” was published by BCS, The Chartered Institute for IT, in 2022.
So it should be a fascinating discussion.
Join us on Wednesday, 30th October at 7 am ET, 1 pm BST, 2 pm CET and 5.30 pm IST.