The technology buzz over the last year has been mainly about AI. The explosive emergence of Large Language Models (ChatGPT, Bard, etc.) has brought AI into the mainstream. That’s often the way: a strand of technology innovation bubbles along, developing at pace but broadly unnoticed in the background until, seemingly overnight, it emerges in a consumer application. And then whoosh, it takes off like a rocket.

 

But AI has not just happened overnight; it has been developing, morphing and evolving for over 80 years. We’ve compiled a timeline of how it has developed, with some of the key milestones.

 

At Carbon Re, we use various kinds of machine learning and deep learning, deployed specifically in the ‘foundation’ industries such as cement manufacture. Our core expertise is ‘reinforcement learning’: an advanced form of artificial intelligence in which an AI agent learns, through repeated play in a virtual environment, to optimise a reward function. It can outperform approaches built on ‘supervised learning’ (the basis of the vast majority of machine learning applications), such as the ‘Model Predictive Control’ used by other ‘AI’ control systems being developed for cement production.
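To make the idea concrete, here is a deliberately tiny sketch of tabular Q-learning, one classic reinforcement learning algorithm, on a toy “corridor” environment (this is an illustration only, not Carbon Re’s production system; all names and parameters are invented for the example). The agent acts, observes a reward, and updates its value estimates until a good policy emerges:

```python
import random

def train_q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0 and
    earns a reward of 1.0 for reaching the rightmost state.
    Actions: 0 = move left, 1 = move right."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-value per (state, action)
    for _ in range(episodes):
        state = 0
        while state != n_states - 1:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Q-learning update: nudge the estimate toward reward + discounted best future value
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

q = train_q_learning()
# After training, the learned policy in every non-terminal state should be "move right"
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(5)]
print(policy)
```

The same loop — act, observe reward, update — underlies far more sophisticated agents; what changes at industrial scale is the environment model, the reward design, and the function approximator that replaces the table.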

 

Reinforcement learning is one of the most promising branches of AI, able to work with very complex systems and solve problems requiring sophisticated strategies.

 

Our team undertakes continuous R&D with the highest level of scientific rigour and very strong connections to leading AI researchers and research labs. We are uniquely positioned not only to translate the latest AI research into value-creating applications for the cement and building industry, but also to advance the state of the art of what AI can achieve in the industry, drawing on our in-house research expertise.

 

We’re constantly striving to build on the breakthrough thinking that has gone before us and move things forward.

Early Developments

 

1943
  • The first work now recognised as AI was done by Warren McCulloch and Walter Pitts, who proposed a mathematical model of artificial neurons.
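The McCulloch-Pitts neuron is simple enough to sketch in a few lines (a modern paraphrase, not their 1943 notation): it fires if the weighted sum of its binary inputs reaches a threshold, which is already enough to implement basic logic gates.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: output 1 ("fire") if the weighted sum of
    binary inputs reaches the threshold, otherwise 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A single neuron suffices for basic logic gates:
def AND(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=1)
```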

 

1949
  • Donald Hebb proposed an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
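Hebb’s rule is often summarised as “neurons that fire together wire together”. A minimal modern simplification (assuming a learning rate and real-valued weights, which are not part of Hebb’s original formulation) grows each weight in proportion to the product of the activities it connects:

```python
def hebbian_update(weights, x, y, lr=0.1):
    """Hebb's rule: strengthen each connection in proportion to the
    product of pre-synaptic activity x[i] and post-synaptic activity y."""
    return [w + lr * xi * y for w, xi in zip(weights, x)]

weights = [0.0, 0.0, 0.0]
# Repeatedly present a pattern where inputs 0 and 2 fire together with the output:
for _ in range(5):
    weights = hebbian_update(weights, x=[1, 0, 1], y=1)
# Connections from the co-active inputs strengthen; the silent input's does not.
```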

 

1950
  • In his paper “Computing Machinery and Intelligence”, Alan Turing proposed the “Turing Test” to assess a machine’s ability to exhibit intelligent behaviour equivalent to that of a human.

 

1955
  • Allen Newell and Herbert A. Simon (with programmer Cliff Shaw) created the “first artificial intelligence program”, named “Logic Theorist”, which proved 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica and found new, more elegant proofs for some of them.

 

1956
  • The term artificial intelligence (AI) was coined by American computer scientist John McCarthy at the Dartmouth Summer Research Project on Artificial Intelligence, which brought together experts from a range of disciplines and where Logic Theorist was presented.

 

1959
  • Arthur Samuel, a computer scientist who developed a program to play checkers, coined the term “machine learning” when describing how a computer could be programmed to play the game better than the person who wrote its program.
  • Although the first generation of AI researchers predicted that a computer would be world chess champion within a decade, and that within two decades machines would be able to do “any work a man can do”, progress was held back by the limitations of early computers, which could execute commands but not store them, and by the enormous cost of computing time.
  • As computers became faster, cheaper, more accessible, and able to store more information, progress continued. Machine learning algorithms also became better able to solve problems and interpret spoken language. DARPA, the US Defense Advanced Research Projects Agency, funded AI research at a number of institutions. “The government was particularly interested in a machine that could transcribe and translate spoken language as well as high throughput data processing,” according to Harvard University. 

 

1961
  • Unimate became the first robot to work on a General Motors assembly line in New Jersey, transporting die castings from the assembly line and welding parts onto cars.

 

1966
  • ELIZA, the first chatbot, was developed by Joseph Weizenbaum at MIT.

 

1972
  • WABOT-1, the first “intelligent” humanoid robot, was built at Waseda University in Japan.

 

1974-1980
  • Although research continued, there was a period of reduced funding and interest that some called the “AI winter”. This coincided with the aftermath of the 1973 oil crisis, when OPEC’s oil embargo caused prices to rise almost 300% and costs were cut across the economy.  
  • However, in the 1980s, as computing power and new algorithms developed, AI research saw a resurgence. John Hopfield and David Rumelhart popularised neural network techniques (forerunners of today’s “deep learning”) that enabled computers to learn from experience, while Edward Feigenbaum set out the concept of expert systems, which mimicked human processes of expert decision-making.
  • Japan’s Fifth Generation Computer Project (FGCP) invested $400 million in AI from 1982-1990, but was seen as not achieving its goals. Its demise marked another pause in AI funding, but out of the spotlight and with limited government funding, AI started to make real progress and a few key landmarks were achieved. 
1980
  • The first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.

 

1986
  • Mercedes-Benz built a driverless van equipped with cameras and sensors that could drive at up to 55 mph on roads free of obstacles and other drivers.

 

1988
  • Jabberwacky, a chatbot designed to “simulate natural human chat in an interesting, entertaining and humorous manner”, was released, an early example of a chatbot built to hold open-ended conversations with people.

 

1995
  • Computer scientist Richard Wallace developed A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), a chatbot inspired by ELIZA that added natural-language sample data collection.

 

1997
  • IBM’s Deep Blue defeated world chess champion Garry Kasparov.
  • Microsoft installed speech recognition software on Windows. 
  • Kismet, a robot that could recognise and display human emotions, was launched.

 

2002
  • Roomba, a robot vacuum cleaner, brought AI into the home for the first time.

 

2004
  • NASA’s robotic exploration rovers Spirit and Opportunity navigated Mars’ surface without human intervention.

 

2006
  • AI entered the business world, with companies such as Facebook, Twitter and Netflix starting to use it.

 

2010
  • ImageNet launched the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual object-recognition competition.
  • Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement using a 3D camera and infrared detection.

 

2011
  • Watson, another IBM computer, won the US quiz show Jeopardy!, demonstrating the ability to understand and answer complex questions and riddles rapidly.
  • Apple released Siri, its virtual assistant, which could “infer, observe, answer, and recommend things to its human user”, adapt to voice commands and project an “individualized experience” for each user.

 

2012
  • Google launched Google Now, which used AI to predict and surface the information a user was likely to need.

 

2013
  • A research team from Carnegie Mellon University released Never Ending Image Learner (NEIL), a semantic machine learning system that could compare and analyze image relationships.

 

2014
  • Microsoft released Cortana, its version of Siri, and Amazon released Alexa, a virtual assistant embedded in a smart speaker. Google Home would follow in 2016.

 

2015-2017
  • Google DeepMind’s AlphaGo defeated a succession of human champions at the board game Go, including Lee Sedol in 2016.

 

2020
  • Baidu released LinearFold, an AI algorithm, to scientific and medical teams developing a vaccine during the early stages of the COVID-19 pandemic. The algorithm could predict the secondary structure of the virus’s RNA sequence in only 27 seconds, 120 times faster than other methods.
  • In June, OpenAI released GPT-3, a large language model capable of generating human-quality text, unleashing a wave of further LLM announcements.

 

2022
  • Google AI released LaMDA, a conversational language model that could answer questions in an informative way, even when they were open-ended, challenging, or strange.
  • In November, OpenAI released ChatGPT, a conversational interface to its GPT models, which brought large language models to mainstream attention almost overnight.
2023
  • By the time GPT-4 was released in March 2023, the world was going into meltdown over the potential (for good and bad) of LLMs and generative AI.
  • More significantly for our long-term ability to affect the structures of life itself, DeepMind’s AlphaFold 2, a protein-folding model first released in 2021, has continued to predict the structure of proteins with unprecedented accuracy. Just one more example of remarkable innovation in the AI field.

 

Moving forward

For the rest of this year, most attention will remain on the more visible iterations of AI such as LLMs, with regulators racing to catch up with the advances in technology to ensure responsible development of AI.  However, AI is starting to be deployed across more and more applications – many of them invisible to the public at large – and this is only likely to accelerate.