How AI has been hijacked, the AGI fallacy and leveraging Vertical AI.

The term Artificial Intelligence has been with us since the mid-1950s, and there have been three big surges in AI development: the first in the early 1970s, when computer science tried to mimic the structure of the brain and its neurons; the second around the 1990s, with the arrival of machines like Deep Blue, which beat Garry Kasparov at chess, where advances were made through learning – real AI.  We are now in our third surge in AI, but is it all as it seems?

Artificial Intelligence in the 2020s

The inexorable rise of the current crop of ‘AI’ is a good thing, right? Well, yes, kind of. The big problem I have is that most companies claiming to have some sort of AI actually have nothing of the sort.  Sure, they have smart systems, or systems that rely on fuzzy matching or data heuristics.  But this isn’t AI, no matter how hard they try to convince you it is. I was recently speaking to someone from the world of marketing about companies he had worked with.  He openly stated that the last two companies claimed they had an AI system and their marketing material was stacked with the term ‘AI’.  But their systems really weren’t AI at all; they were just bandwagon jumping. This is where we are with the vast majority of AI right now. They’re not intelligent.  They’re not learning.  They are static systems that may have more data available over time for use in determining outcomes, but is that really AI?  In my view it absolutely isn’t.

What About the Large Language Models?

For me, this also extends, in part, to the world of Large Language Model (LLM) and Generative AI (GenAI) solutions such as ChatGPT.  Let me explain.  LLMs have essentially indexed the internet and, in processing this data, have created links between datasets based on fixed rulesets.  This means that similar information is grouped together, weighted and linked to other similar areas: a big mesh of interconnected data.  This is not AI – this is data mining. Where they do have some AI is in the Natural Language Processing (NLP) systems, or whatever they want to call them, that sit in front of them and receive prompts from the individuals or systems accessing them – ‘Write me an essay on glacial loss in Greenland’, for example. The system then goes away and pulls together an output based on the weighted indexes it has created in the data, pooling together information around glacial loss in Greenland. Sounds intelligent, right?  Well, no.
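The ‘mesh of interconnected data’ idea can be shown with a toy sketch.  This is purely illustrative Python using a crude word-overlap weighting – real LLMs use learned embeddings at vastly greater scale, and every document, name and score here is invented for the example.

```python
# Toy sketch of documents linked by a fixed similarity rule.
# Illustrative only: real LLMs use learned embeddings, not word overlap.

def tokens(text: str) -> set[str]:
    """Lower-case word set for a crude bag-of-words comparison."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap weight between two token sets (0..1)."""
    return len(a & b) / len(a | b)

docs = {
    "d1": "glacial loss in greenland accelerating",
    "d2": "greenland glaciers losing ice mass",
    "d3": "chess engines and deep search",
}

# Build weighted links between every pair of documents -- the "mesh".
links = {
    (x, y): jaccard(tokens(docs[x]), tokens(docs[y]))
    for x in docs for y in docs if x < y
}

# A prompt is answered by pulling the most strongly linked documents.
query = tokens("glacial loss in greenland")
ranked = sorted(docs, key=lambda d: jaccard(query, tokens(docs[d])), reverse=True)
print(ranked[0])  # the glacier documents outrank the chess one
```

The point is that the ‘retrieval’ is mechanical: the chess document loses out purely on word overlap, with no understanding of glaciers involved anywhere.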
The big problem with the internet is that there is so much false information out there, and the scraping engine that is grabbing and collating all this information has no idea what is accurate and what is false. Doesn’t sound so intelligent now, does it?

ChatGPT, and similar systems, are prone to what are being referred to as hallucinations – creating falsehoods that appear to be correct. Notable hallucinations have included inventing false legal precedents, as seen in a high-profile court case in New York.  A couple of lawyers asked ChatGPT to create a brief for a court case they were defending.  This it did, including citations to other cases that would back up their point.  When these were presented in court, the presiding judge called a recess so he could find these citations and read them.  It turned out they didn’t exist.  ChatGPT had made them up.  The judge sanctioned the lawyers, fined them and dismissed the case. 1

More recently, Meta launched an AI chatbot into a mushroom foragers’ group on Facebook, where it promptly told one user to cook and eat a fungus that is highly toxic and responsible for at least one death. Other instances of dangerous advice from chatbots are included in the same referenced article. 2

The latest research on LLMs is showing that the greater the processing power of the LLM, and the data available to it, the more likely it is to hallucinate.  OpenAI’s o1 model, their latest release as of this article, hallucinates more than its predecessors. 3 Another recent study showed that legal AI models hallucinate on 1 in every 6 prompts submitted. 4

And herein lies one of the great problems with the current raft of Large Language Models like ChatGPT, Google’s Bard or Meta’s LLaMA.

“What the Large Language Models are good at, is saying what an answer should sound like, which is different from what an answer should be”

Rodney Brooks.

Robotics and AI Pioneer, April 2023.

Artificial General Intelligence

Artificial General Intelligence (AGI) is the ability of a machine to behave like a human in the way it thinks and expresses views and information. In 1970 Marvin Minsky, one of the greats of AI research, stated that we would have AGI within six years.  That time came and passed.  At the beginning of November 2024, Sam Altman, the mercurial CEO of OpenAI, stated that we would have AGI within five years. 5 I truly believe that, like Marvin Minsky’s prediction, that time will come and pass. Will OpenAI claim AGI? Almost definitely. Will it actually be AGI? Almost definitely not.  But they need it to ensure continued investment.

There are just too many barriers.  AGI is not about having all the information in the world and being able to regurgitate it, or even reason logically to get the answer.  It is about how that response fits in a global, national or societal context. The exact definition of AGI is a hot topic at the moment, with people – normally from companies trying to achieve AGI – proposing different measurements by which they can state they have achieved it.

For AGI to be achieved and be useful, the hallucinations need to be erased.  However, as per the comment above, it seems that as these systems become more complex, the likelihood of hallucinations increases.  This also extends to the use of data that would reasonably fit into acceptable use in society. Microsoft’s Tay chatbot is a case in point.  In 2016 Microsoft released an AI Twitter chatbot that was designed to create useful posts.  However, within 16 hours of its launch it had been shut down, never to be seen again.  Based on the information it was scraping and the interactions it was having, it started posting inflammatory and offensive tweets, several of which were racist and genocidal. 6

The contextualising and filtering of information is innate to us.
We understand our place in the world; we understand, to a greater or lesser extent, how the world works and what society expects of us.  This is acquired through years, and sometimes decades, of real-world learning.  Our brains are still many, many magnitudes more complex than any artificial system currently around. The challenge is being able to program those decades of learning into a system, and I really don’t see how this is practicable.  It is not about raw processing power, so the arrival of quantum computers will not make this happen.  This is about a programming team being able to develop code that mimics those decades of learning and that constantly evolves based on global and societal changes. This won’t happen in the next five years, or even in my lifetime.

Also, if the end result is the same each time a new instance of the system is switched on, then it’s not AGI.  How can it be?  You, as the reader of this article, will differ in how you behave and respond from how I do.  This is down to differences in upbringing that are familial, educational, societal and national.  Intelligence is as much about the differences as the similarities.  And it’s certainly not about hallucinating all the time!

Vertical AI

Vertical AI (VAI) is the counterpoint to AGI. It is a tailored and optimised system that is specific to a particular industry problem or sector.  Current real-world examples include diagnostic tools that are picking up tumours months, if not years, before clinicians do.  Can it tell you about glacial loss in Greenland?  No.  But it can find tumours sooner than your doctor can.  And faster and more accurately too. Speed is a big part of this, especially when compared to the GenAI and LLM solutions.  Where something like ChatGPT will take 10 or 20 seconds or even longer to come up with an answer, VAIs do what they do tens, hundreds or even thousands of times quicker.
Hurricane has two intelligent systems powering its global trade solutions, meaning our clients can leverage systems that learn over time and get better at what they do.  These solutions are:
  • Zephyr Matching Engine (ZME) – the solution that underpins our HS Matching service.  This engine is a key part of Kona, our acknowledged world-leading all-in-one solution for global trade. ZME is constantly learning new matches and nuances in language.  When we started with ZME we were matching at around 82% (compare this to a trained customs broker at around 75-80%).  Today, as I write this article, our 3-month rolling average is a 97.6% match rate.  ZME processes a request, does a qualitative analysis on the description, validates any HS code provided and returns both a suitable HS6 and 10+ digit import and export tariff codes in around 150ms!  That is literally in the blink of an eye.  And the serverless architecture we have built allows us to process 10,000 such calls a second on each of our global nodes.  We have three currently, with a fourth coming online in 2025.  That’s 40,000 calls a second – a theoretical capacity of over 103 billion calls in a 30-day month.
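The capacity figure quoted above can be sanity-checked with some back-of-envelope arithmetic.  The numbers are those from the bullet; the calculation itself is just multiplication:

```python
# Back-of-envelope check of the ZME throughput figures quoted above.
calls_per_second_per_node = 10_000
nodes = 4                        # three live nodes plus the one due in 2025
seconds_per_day = 24 * 60 * 60   # 86,400
days = 30

calls_per_second = calls_per_second_per_node * nodes          # 40,000
monthly_capacity = calls_per_second * seconds_per_day * days  # 103.68 billion

print(f"{monthly_capacity / 1e9:.2f} billion calls per 30-day month")
```

At 40,000 calls a second this works out to 103.68 billion calls in a 30-day month, matching the figure in the text.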
  • Bluestone AI – Bluestone is Hurricane’s true AI.  It’s a deep learning NLP engine that was built from the ground up, specific to our requirements.  It doesn’t piggyback on any pre-existing solutions like a lot of services do – these all have compromises or limitations, including processing speed.  So we did what we always do: we wrote our own.  The engine behind Bluestone powers our Sanctioned Parties service in Kona, allowing us to provide a more nuanced response when screening for people or companies that may be excluded from receiving shipments.  This allows our clients to build risk profiles as granular as they wish, even to the level of each trade lane.  As of the date of this article, I am redesigning Bluestone to become not just a Hurricane product, but a solution that can use your data to bring VAI and all its benefits to your business. Bluestone v4 will be faster and more accurate, able to learn both from structured examples and from direct feedback from you – and your customers, if you want.
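For a flavour of what nuanced screening involves, here is a minimal sketch of fuzzy name matching against a denied-party list, written in plain Python with an invented list and threshold.  It is not Bluestone’s actual algorithm – just an illustration of why exact string comparison is not enough for sanctions screening.

```python
# Minimal fuzzy screening sketch. The list, names and threshold are
# invented for illustration -- this is NOT Bluestone's real algorithm.
from difflib import SequenceMatcher

SANCTIONED = ["Acme Export Co", "Global Widget Trading Ltd"]

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return sanctioned entries whose similarity to `name` meets the threshold."""
    hits = []
    for entry in SANCTIONED:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# An exact hit and a near-miss spelling both flag; an unrelated name does not.
print(screen("Acme Export Co"))
print(screen("Acme Exprot Co"))   # transposed letters still match
print(screen("Northwind Foods"))  # no hit
```

A real system layers on aliases, transliteration, entity resolution and the per-trade-lane risk profiles described above; the sketch only shows the core idea that near-miss spellings must still flag.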
In Conclusion

AI is the current buzzword in tech.  However, the phrase itself is being diluted by companies trying to leverage themselves into the space without systems that are actually what they claim.

It should be noted that the current level of power and cooling required to run the massive data centres behind the LLMs means that even the biggest of these companies is haemorrhaging cash at an unsustainable rate.  It is estimated, as of the time of this article, that ChatGPT costs around $7 billion a year to run, with around $4 billion of that going directly to Microsoft for running costs associated with its Azure Cloud platform. 7 OpenAI raised £5 billion / $6.6 billion in early October 2024, 8 giving them a market capitalisation of $157 billion. There are still questions about their revenue model and how it will support the cash burn rate, particularly when you compare the level of the raise to the running costs.  $7 billion in costs and $5 billion in losses means current revenue is running at $2 billion at best.  Joe Public is not the solution; they have to look at big business and governments, but the question still remains: will it be enough?

The current AI gold rush is highly reminiscent of the original dot-com bubble, and we know how that ended. Beware: all is not what it seems. It’s like the Wild West in the world of AI at the moment, full of cowboys promising gold but delivering iron pyrite – and the last thing you want is to be left looking like a fool(’s gold).

Stay safe out there.

Ian.
Ian is CTO, Head of R&D and Co-Founder of Hurricane Commerce.  He has been designing and implementing self-learning intelligent systems since 1985, across multiple industrial sectors.  Ian is a Member of the Institute of Physics (MInstP). Contact us to discuss how Hurricane’s intelligent systems, along with its true AI, create world-leading solutions for global trade.
References:

1 – New York lawyers sanctioned for using fake ChatGPT cases in legal brief – Reuters, 26 June 2023
2 – AI Chatbot Joins Mushroom Hunters Group, Encourages Dangerous Mushroom – Gizmodo, 13 November 2024
3 – OpenAI’s o1 Model is a Hallucination Train Wreck – Cubed, 13 September 2024
4 – AI on Trial: Legal Models Hallucinate in 1 out of 6 – Stanford University HAI, 23 May 2024
5 – OpenAI CEO Sam Altman says AGI would have ‘whooshed by’ in 5 years – MSN, 5 November 2024
6 – Microsoft scrambles to limit PR damage over abusive AI bot Tay – The Guardian, 24 March 2016
7 – The Cost of Running ChatGPT Is Insanely High! OpenAI May Lose $5 Billion This Year – AI Base, 26 July 2024
8 – OpenAI raises £5 billion in largest ever funding round – Yahoo Finance, 3 October 2024
