In the summer of 1956, a workshop at Dartmouth College brought together scientists from industry and academia. Among those in attendance were Allen Newell and Herbert Simon of Carnegie Mellon University, John McCarthy, then at Dartmouth, Marvin Minsky, then at Harvard, and Arthur Samuel of IBM. These men became the founders and early developers of a new field of research called artificial intelligence (AI).

Though a relatively modern concept, AI has its roots in the formalization of logic by prominent ancient cultures during humankind’s ascent to civilization. The philosophers and mathematicians of antiquity formed the backbone of rational thought, and over the centuries mathematical and scientific proofs came to rest on logical reasoning.

As the work of mathematicians, logicians and scientists in each century was coupled with the emerging technologies of the day, a body of knowledge slowly accumulated and was passed on from one century to the next. Much of this accumulating knowledge was economically driven, but it was also filtered, controlled and often hindered by quirky monarchs, patrician families, religious leaders and even natural disasters and disease.

These trends continued into the 20th century, when Alan Turing was born in 1912. Turing was an English mathematical prodigy who integrated the mathematics of logic with electromechanical devices. Through his ingenious and insightful work in using binary code to harness machines for “thinking” tasks (e.g., mathematical operations), Turing became known as the father of computer science and AI. His work at Bletchley Park, England, during World War II produced specially developed machinery that broke Nazi communications codes and ciphers, including those of the theretofore elusive Enigma machine. No one can say for sure, but some claim that Turing’s work shortened the war in Europe by two years and saved millions of lives.

Though Turing’s life came to a sad and premature end, the work he pioneered became the foundation of modern computing machines. As others picked up where Turing left off, machines were developed that could think about thought itself and adjust their logic based on the results of their decisions. In other words, they could “learn.” This brings us to a working definition of AI: “the simulation of human intelligence processes by machines, including learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.”

A few weeks ago, my imagination was captured by an announcement from Arkansas-based Big River Steel (BRS). The company has partnered with Noodle.ai and EFT Analytics to develop AI capabilities that will connect every aspect of its steelmaking process “to create a Learning Mill that is integrated at a level that was previously thought impossible. Since we are constantly collecting data in real time, adding AI to the mix gives us the ability to be completely adaptive to the specific properties of each steel grade in the exact moment it’s being produced.” All I can say is “Bravo!” to the companies partnering in this endeavor.

Alan Turing proposed that if a human could not distinguish between the responses of a machine and those of another human, the machine could be considered “intelligent.”

I don’t know whether the system at Big River Steel can pass Turing’s test as stated, but I am cheering for BRS and its partners as they turn artificial intelligence into genuine profits.