The term Artificial Intelligence has been with us since the mid-1950s, and there have been three big surges in AI development. The first came in the early 1970s, when computer science tried to mimic the structure of the brain and its neurons. The second came in the 1990s with the arrival of machines like Deep Blue, which beat Garry Kasparov at chess, and where advances were made through learning – real AI. We are now in our third surge in AI, but is it all as it seems?
Artificial Intelligence in the 2020s
The inexorable rise of the current crop of ‘AI’ is a good thing, right?
Well yes, kind of.
The big problem I have is that most companies that claim to have some sort of AI actually have nothing of the sort. Sure, they have smart systems, or systems that rely on fuzzy matching or data heuristics. But this isn’t AI, no matter how hard they try to convince you it is.
I was recently speaking to someone from the world of marketing about companies he had worked with. He openly stated that the last two claimed to have an AI system, and that their marketing material was stacked with the term ‘AI’. But their systems really weren’t AI at all; they were just bandwagon jumping.
This is where we are with the vast majority of AI right now. They’re not intelligent. They’re not learning. They are static systems that may have more data available over time for use in determining outcomes, but is that really AI? In my view it absolutely isn’t.
What About the Large Language Models?
For me, this also extends, in part, to the world of Large Language Models (LLM) and Generative AI (GenAI) solutions such as ChatGPT. Let me explain. LLMs have basically indexed the internet, and in processing this data have created links between datasets based on fixed rulesets. This means that similar information is grouped together, weighted and linked to other similar areas. A big mesh of interconnected data.
This is not AI. This is data mining.
Where they do have some AI is in the Natural Language Processing (NLP) systems, or whatever they want to call them, that sit on the front of them and receive prompts from the individuals or systems accessing them.
‘Write me an essay on glacial loss in Greenland’ for example.
The system then goes away and pulls together an output based on the weighted indexes it has created in the data, pooling together information around glacial loss in Greenland.
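To make that “weighted mesh” idea concrete, here is a deliberately simplified sketch in Python of what grouping and ranking text by similarity looks like. It illustrates the general principle only, not how ChatGPT or any real LLM is actually built; the embed() and similarity() functions are hypothetical stand-ins for the learned weightings described above.

```python
# A minimal, hypothetical sketch of "grouping similar information together".
# Illustrative only - not how any real LLM is implemented.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Turn a piece of text into a crude bag-of-words vector."""
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

documents = [
    "glacial loss in Greenland is accelerating",
    "Greenland ice sheet melt and glacier retreat",
    "best sourdough bread recipe",
]

# Rank the documents against the prompt; the scores stand in for the
# learned weightings described in the paragraph above.
query = embed("glacial loss in Greenland")
ranked = sorted(documents, key=lambda d: similarity(query, embed(d)), reverse=True)
print(ranked[0])  # the Greenland documents outrank the bread recipe
```

The point of the sketch is that the system only scores overlap between pieces of text; nothing in it has any model of what glaciers or Greenland actually are.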
Sounds intelligent, right? Well, no.
The big problem with the internet is that there is so much false information out there, and the scraping engine that is grabbing and collating all this information has no idea what is accurate and what is false.
Doesn’t sound so intelligent now does it?
ChatGPT, and similar systems, are prone to what are being referred to as hallucinations. That is, creating falsehoods that appear to be correct.
Notable hallucinations have included creating false legal precedents, as seen in a high-profile court case in New York. A couple of lawyers asked ChatGPT to create a brief for a court case they were defending. This it did, including citations to other cases that would back up their point. When these were presented in court, the presiding judge asked for a recess so he could find these citations and read up on them. It turned out they didn’t exist. ChatGPT had made them up. The judge sanctioned the lawyers, fined them and dismissed the case. 1
More recently, Meta launched an AI chatbot into a mushroom foragers group on Facebook, where it promptly told one user to cook and eat a fungus that is highly toxic and responsible for at least one death. Other instances of dangerous advice from chatbots are included in the same referenced article. 2
The latest research on LLMs is showing that the greater the processing power of the LLM, and the data available to it, the more likely it is to hallucinate. OpenAI’s o1 model, their latest release as of this article, hallucinates more than its predecessors. 3
Another recent study showed that legal AI models hallucinate in 1 out of every 6 prompts submitted. 4
And herein lies one of the great problems with the current raft of Large Language Models like ChatGPT, or Google’s Bard, or Meta’s LLaMa.
“What the Large Language Models are good at, is saying what an answer should sound like, which is different from what an answer should be”
Rodney Brooks.
Robotics and AI Pioneer, April 2023.
Artificial General Intelligence
Artificial General Intelligence (AGI) is the ability of a machine to behave like a human in the way it thinks and expresses views and information.
In 1970 Marvin Minsky, one of the greats of AI research, stated that we would have AGI within six years. That time came and passed. At the beginning of November 2024, Sam Altman, the mercurial CEO of OpenAI, stated that we would have AGI within five years. 5 I truly believe that, like Marvin Minsky’s prediction, that time will come and pass.
Will OpenAI claim AGI? Almost definitely.
Will it actually be AGI? Almost definitely not. But they need it to ensure continued investment.
There are just too many barriers. AGI is not about having all the information in the world and being able to regurgitate it, or even being able to reason logically to get to an answer. It is about how that response fits in a global, national or societal context. It’s about self-awareness, consciousness, emotional intelligence and so much more from a cognitive standpoint than just regurgitating facts, even if this involves a level of reasoning.
The exact definition of AGI is a hot topic at the moment, with people, normally from companies trying to achieve AGI, proposing different measurements by which they can claim to have achieved it.
For AGI to be achieved and be useful, hallucinations need to be eliminated. However, as noted above, it seems that as these systems become more complex, the likelihood of hallucinations increases. This also extends to ensuring that the data these systems use, and the responses they give, fall within what society reasonably considers acceptable.
Microsoft’s Tay chatbot is a case in point. In 2016 Microsoft released an AI Twitter chatbot that was designed to create useful posts. However, within 16 hours of its launch it had been shut down, never to be seen again. Based on the information it was scraping and the interactions it was having, it started posting inflammatory and offensive tweets, several of which were racist and genocidal. 6
This tendency can also become highly targeted and threatening. A Google Gemini chatbot user, who was asking for homework help, recently received a very alarming message. When asked about the incident, Google responded by saying that LLMs sometimes come up with nonsensical replies. However, the response received made perfect sense to the user: “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.” 7
The contextualising and filtering of information is innate to us. We understand our place in the world; we understand, to a greater or lesser extent, how the world works and what society expects of us. This is acquired through years, and sometimes decades, of real-world learning. Our brains are still many, many orders of magnitude more complex than any artificial system currently around.
The challenge is being able to program those decades of learning into a system, and I really don’t see how this is practicable. It is not about raw processing power, so the arrival of quantum computers will not make this happen. This is about a programming team being able to develop code that mimics those decades of learning and that constantly evolves based on global and societal changes.
This won’t happen in the next 5 years, or even in my lifetime. Also, if the end result is the same each time a new instance of the system is switched on, then it’s not AGI. How can it be? You, as the reader of this article, will behave and respond differently than I do. This is down to differences in upbringing that are familial, educational, societal and national. Intelligence is as much about the differences as the similarities. And it’s certainly not about hallucinating all the time!
Vertical AI
Vertical AI (VAI) is the counterpoint to AGI.
It is a tailored and optimised system that is specific to a particular industry problem or sector. Current real-world examples include diagnostic tools that are picking up tumours months, if not years, before clinicians do. Can it tell you about glacial loss in Greenland? No. But it can find tumours sooner than your doctor can. And faster and more accurately too.
Speed is a big part of this, especially when compared to the GenAI and LLM solutions. Where something like ChatGPT will take 10 or 20 seconds, or even longer, to come up with an answer, VAIs do what they do tens, hundreds or even thousands of times quicker.
Hurricane has two intelligent systems that power its global trade solutions, which means our clients can leverage systems that learn over time and get better at what they do. These solutions are:
- Zephyr Matching Engine (ZME) – the solution that underpins our HS Matching service. This engine is a key part of Kona, our acknowledged world-leading all-in-one solution for Global Trade. ZME is constantly learning new matches and nuances in language. When we started with ZME we were matching at around 82% (compare this to a trained customs broker at around 75-80%). Today, as I write this article, our 3 month rolling average match rate is 97.6%. ZME processes a request, does a qualitative analysis on the description, validates any HS code provided and returns both a suitable HS6 as well as 10+ digit import and export tariff codes in around 150ms! That is literally in the blink of an eye. And the serverless architecture we have built allows us to process 10,000 such calls a second on each of our global nodes. We have three currently, with a fourth coming on line in 2025. That’s 40,000 calls a second. That’s a theoretical capacity of over 103 billion calls in a 30 day month (a quick back-of-envelope check of that figure is sketched just after this list).
- Bluestone AI – Bluestone is Hurricane’s true AI. It’s a deep learning NLP system that was built from the ground up, specific to our requirements. It doesn’t piggyback on any pre-existing solutions like a lot of services do. These all have compromises or limitations, including processing speed. So we did what we always do: we wrote our own. The engine behind Bluestone powers our Sanctioned Parties service in Kona, allowing us to provide a more nuanced response when screening for people or companies that may be excluded from receiving shipments. This allows our clients to build risk profiles as granular as they wish, even to the level of each trade lane. As of the date of this article, I am redesigning Bluestone to become not just a Hurricane product, but a solution that can use your data to bring VAI and all its benefits to your business. Bluestone v4 will be faster and more accurate, learning both from structured examples and from direct feedback from you and, if you want, your customers.
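As promised above, here is the back-of-envelope check of the ZME capacity figures, using only the numbers quoted in this article (the node count and per-node rate are the stated figures, not independent measurements):

```python
# Back-of-envelope check of the ZME capacity figures quoted above.
# All inputs are the numbers stated in this article; nothing here is measured.
calls_per_second_per_node = 10_000
nodes = 4                        # three today, with a fourth planned for 2025
seconds_per_day = 24 * 60 * 60   # 86,400
days_in_month = 30

calls_per_second = calls_per_second_per_node * nodes              # 40,000
monthly_capacity = calls_per_second * seconds_per_day * days_in_month

print(f"{calls_per_second:,} calls per second")
print(f"{monthly_capacity:,} calls in a 30 day month")            # 103,680,000,000
```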
In Conclusion
AI is the current buzzword in tech. However, the phrase itself is being diluted by companies trying to leverage themselves into the space without systems that actually do what they claim.
It should be noted that the power and cooling requirements of the massive data centres the LLMs rely on mean that even the biggest of these companies are haemorrhaging cash at an unsustainable rate. It is estimated, as of the time of this article, that ChatGPT costs around $7 billion a year to run, with around $4 billion of that going directly to Microsoft for running costs associated with its Azure Cloud platform. 8
OpenAI raised £5 billion / $6.6 billion in early October 2024, 9 valuing the company at $157 billion. There are still questions about their revenue model and how it will support the cash burn rate, particularly when you compare the size of the raise to the running costs. $7 billion in costs and $5 billion in losses means current revenue is running at $2 billion at best. Joe Public is not the solution; they have to look at big business and governments, but the question still remains: will it be enough? The current AI gold rush is highly reminiscent of the original DotCom bubble, and we know how that ended.
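The same back-of-envelope approach applied to the figures cited above makes the point; these are the article’s estimates (references 8 and 9), not audited accounts:

```python
# Rough back-of-envelope on the OpenAI figures cited in this article.
# Estimates only - not audited accounts.
annual_running_costs = 7.0   # $ billions per year (estimate, reference 8)
projected_annual_loss = 5.0  # $ billions per year (estimate, reference 8)
october_2024_raise = 6.6     # $ billions raised (reference 9)

implied_revenue = annual_running_costs - projected_annual_loss  # ~$2 billion a year
runway_years = october_2024_raise / projected_annual_loss       # ~1.3 years at this burn rate

print(f"Implied revenue: ~${implied_revenue:.1f} billion a year")
print(f"The raise covers roughly {runway_years:.1f} years of losses at the current rate")
```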
Beware, all is not as it seems.
It’s like the Wild West in the world of AI at the moment. Full of cowboys promising gold but delivering iron pyrite, and the last thing you want is to be left looking like a prospector holding fool’s gold.
Stay safe out there.
Ian.
Ian is CTO, Head of R&D and Co-Founder of Hurricane Commerce. He has been designing and implementing self-learning intelligent systems since 1985, across multiple industrial sectors. Ian is a Member of the Institute of Physics (MInstP).
Contact us to discuss how Hurricane’s intelligent systems, along with its true AI, create world-leading solutions for global trade.
References:
1 – New York lawyers sanctioned for using fake ChatGPT cases in legal brief – Reuters, 26 June 2023
2 – AI Chatbot Joins Mushroom Hunters Group, Encourages Dangerous Mushroom – Gizmodo, 13 November 2024
3 – OpenAI’s o1 Model is a Hallucination Train Wreck – Cubed, 13 September 2024
4 – AI on Trial: Legal Models Hallucinate in 1 out of 6 – Stanford University HAI, 23 May 2024
5 – OpenAI CEO Sam Altman says AGI would have ‘whooshed by’ in 5 years – MSN, 5 November 2024
6 – Microsoft scrambles to limit PR damage over abusive AI bot Tay – The Guardian, 24 March 2016
7 – Google AI chatbot responds with a threatening message: “Human … Please die.” – CBS, 15 November 2024
8 – The Cost of Running ChatGPT Is Insanely High! OpenAI May Lose $5 Billion This Year – AI Base, 26 July 2024
9 – OpenAI raises £5 billion in largest ever funding round – Yahoo Finance, 3 October 2024