The Power of Causality in Addressing the Unreliability of Current AI Systems

Ilya Sutskever, Chief Scientist of OpenAI, was asked in a March 2023 interview to speculate on the main reason the GPT approach might turn out not to succeed. His answer: unreliability.

Today the US Senate held a hearing on Artificial Intelligence, but a key foundational limitation of AI was not directly addressed in the conversation. In confronting the risks posed by increasingly complex and capable AI systems, the distinction between correlation and causation deserves to be brought into sharper focus. The current state of the art in language modeling is largely based on finding statistical correlations in training data, not on understanding or modeling cause-and-effect relationships.

“Correlation does not imply causation” emphasizes that a correlation between two variables does not necessarily indicate that one causes the other. This is a fundamental concept in scientific research and statistical analysis. As we spend more effort making our artificially intelligent systems safer, more explainable, and more interpretable, understanding causality is critical: it enables better reasoning, generalization, and decision-making.

Correlation is a statistical measure that describes the extent to which two variables fluctuate together, while causation refers to a cause-and-effect relationship in which a change in one variable leads to a change in another. The difference is that correlation can suggest a relationship between two variables, but it does not indicate whether a change in one variable is the cause of the change in the other.
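
To make the distinction concrete, here is a minimal simulation sketch (not from any particular system, with invented variable names): a hidden common cause Z drives both X and Y, so the two correlate strongly in observational data, yet intervening on X leaves Y unchanged.

```python
# Illustrative sketch: a hidden common cause Z drives both X and Y, so X and Y
# are strongly correlated even though neither causes the other.
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# Observational data: Z causes both X and Y.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.1) for zi in z]
y = [zi + random.gauss(0, 0.1) for zi in z]
print("observed corr(X, Y):", round(correlation(x, y), 2))    # close to 1.0

# Intervention: set X by hand (do(X)). Y still depends only on Z,
# so the strong observational correlation disappears.
x_do = [random.gauss(0, 1) for _ in range(10_000)]
y_do = [zi + random.gauss(0, 0.1) for zi in z]
print("corr under do(X):", round(correlation(x_do, y_do), 2))  # close to 0.0
```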

Filip Piękniewski, a researcher working on computer vision and AI, states the problem strongly: “This distinction between weak, statistical relationship and a lot stronger, mechanistic, direct, dynamical, causal relationship is really at the core of what in my mind is the fatal weakness in contemporary approach in AI.” https://blog.piekniewski.info/2023/04/09/ai-reflections/

Systems that use probabilistic correlation to find patterns rely on identifying statistical relationships between different variables or events. These models are primarily predictive in nature: they do not provide insight into why a certain outcome occurred, or even whether the relationship between the variables reflects genuine dependence or mere chance. Correlation is a weaker notion than a direct cause-and-effect relationship.
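
The “mere chance” failure mode is easy to reproduce. In the hypothetical sketch below, many random candidate variables are screened against a random target: with few samples, some candidate correlates strongly in-sample, the apparent pattern vanishes on fresh data, and nothing in the model explains why, because no mechanism was ever represented.

```python
# Illustrative sketch: chance correlations from screening many variables
# against a target with only a small sample.
import random

random.seed(1)
N_SAMPLES, N_CANDIDATES = 20, 500

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

target = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
candidates = [[random.gauss(0, 1) for _ in range(N_SAMPLES)]
              for _ in range(N_CANDIDATES)]

# The "best" candidate looks strongly related to the target in-sample...
best = max(range(N_CANDIDATES),
           key=lambda i: abs(correlation(candidates[i], target)))
print("best in-sample |corr|:",
      round(abs(correlation(candidates[best], target)), 2))

# ...but fresh draws of the same variables show the relationship was chance.
fresh_target = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
fresh_best = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
print("fresh |corr|:",
      round(abs(correlation(fresh_best, fresh_target)), 2))
```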

We at Decision-Zone argue that implementing AGI using a computer language with causal operators will be transformative. The DADA X platform is programmed in Rapide, a language with causal operators, so it has the power to model complex interactions and track causality in a way that probabilistic models cannot. A model can describe how different events lead to other events, and it can handle complex, intertwined systems where “how” something works, meaning the order of events and the relationships between them, is important.
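
As a rough illustration of the idea (written in Python, not Rapide, with invented names rather than anything from the DADA X platform), an event model can record each event’s direct causes, so the history of how an outcome arose can be traced rather than merely predicted:

```python
# Hypothetical sketch of causal event tracking; not DADA X or Rapide code.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    causes: list["Event"] = field(default_factory=list)  # direct causal parents

    def trace(self, depth: int = 0) -> None:
        """Print the causal ancestry of this event."""
        print("  " * depth + self.name)
        for cause in self.causes:
            cause.trace(depth + 1)

# A small causal history: a sensor fault leads to an operator override,
# and the two together cause a shutdown.
sensor_fault = Event("sensor_fault")
operator_override = Event("operator_override", causes=[sensor_fault])
shutdown = Event("shutdown", causes=[sensor_fault, operator_override])

shutdown.trace()
# shutdown
#   sensor_fault
#   operator_override
#     sensor_fault
```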

Where the stakes can be extremely high, it is essential that the software controlling devices or analyzing data is not just apparently correct, but truly correct. In an event-based programming language such as Rapide, causal relationships can be represented by the sequence of events. When one event triggers another, this can be read as a cause-and-effect relationship. For example, an event ‘A’ might trigger an event ‘B’, which in turn triggers an event ‘C’. In this case, ‘A’ is a cause of ‘B’, and ‘B’ is a cause of ‘C’. Rapide also uses branching temporal logic, which deals with the logical relationships between statements over time, to model causality. For example, a temporal logic statement might assert that whenever a certain condition is true at one point in time, another condition will be true at a later point in time.
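
The sketch below (again in Python, not actual Rapide syntax, with an invented trace and property) shows what checking such a temporal property over an event trace can look like: whenever ‘A’ occurs, ‘C’ must occur at some later point.

```python
# Hypothetical sketch of a temporal-logic style check over an event trace.
def always_eventually(trace: list[str], trigger: str, response: str) -> bool:
    """True if every occurrence of `trigger` is followed later by `response`."""
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False
    return True

# A triggers B, which in turn triggers C.
good_trace = ["A", "B", "C", "A", "B", "C"]
bad_trace  = ["A", "B", "C", "A", "B"]      # the second A never leads to C

print(always_eventually(good_trace, "A", "C"))  # True
print(always_eventually(bad_trace,  "A", "C"))  # False
```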

Balderton has suggested that “there will have to be a mental shift to probabilistic workflows in which users will have to work with this uncertainty and develop methods to manage it.” I find this idea troublesome at best. Fortunately, it is not our only option: future autonomous agents with causal reasoning can reduce this uncertainty.

By maintaining the causal relationships between events, DADA X (Decentralized Autonomous Decisioning Agent) enables users to better understand, and therefore to manage, control, and secure, their complex systems.