The state of artificial intelligence according to AI pioneer Randy Goebel

AJ Chisling
7 min read · Jan 5, 2024

Here’s our conversation with a renowned AI expert about the technology’s teenage years and beyond.

AI pioneer Randy Goebel is a professor in the Department of Computing Science at the University of Alberta, a founder and researcher with the Alberta Machine Intelligence Institute (AMII), and a contributor to the development of the University of Alberta’s relationship with Google DeepMind, the group behind AlphaGo. Goebel’s theoretical work on abduction, hypothetical reasoning, and belief revision is internationally acclaimed, and his recent application of practical belief revision and constraint programming to scheduling, layout, and web mining has had widespread impact across multiple industry verticals.

More recently, Goebel has been working on the application of machine learning to visual explanation and natural language processing, with focus on legal reasoning. He has previously held faculty appointments at the University of Waterloo and the University of Tokyo, and is actively involved in academic and industrial collaborative research projects in Canada, Australia, Malaysia, Europe and Japan. Goebel is on the advisory boards of the German Research Centre for AI, the Japan Science and Technology Organization and the Japanese National Institute for Informatics.

Here is our discussion about the state of AI.

Ava Chisling: As a pioneer in AI, where do you think we are on the grand scale of things? Are we in the technology’s infancy? Teen years? Older?

Randy Goebel: AI is beyond infancy because many AI foundations already provide real social and economic value, but in highly selective or specialized areas. For example, credit card transaction patterns have been extracted and used to trigger fraud detection for decades, and most modern communication networks use automated diagnostic and repair architectures. Explicit uses of AI include voice-recognition assistants like Siri, Alexa, Google Now, and Cortana; recommender systems are so common that most people don’t even recognize they are in place.

Given the incredible and often artificial enthusiasm for machine learning, especially deep learning, it is easy to anticipate AI as an emerging teenager: full of confidence, with no problem unsolvable, and in complete denial about self-awareness and stage of maturity.

AC: When you’re interested in areas like machine learning or algorithms and theory, what makes you go “hmmm, now that’s worth a look?” In other words, what piques your interest?

RG: As an emerging teenager, AI is in what some have called an empirical phase, where performance is what captures attention for most people, and for many technology people. For example, AlphaGo, image recognition and labelling, and natural language processing (both speech and text), have all provided relatively high-profile impact because their performance is easily understood.

But real AI innovation is much more elusive. It is hard to recognize, and even harder to create. Some have noted a spectrum of intelligent behaviour that only begins with performance, then requires (self) explanation, and perhaps, next, includes teachability. However, like all teenagers, AI treats history as irrelevant: somehow McCarthy’s 1958 “Advice Taker” paper is unknown or, worse, dismissed.

AC: Has being in Canada helped you in your work? If so, how?

RG: When I was an M.Sc. student in Canada, my supervisor (Len Schubert, now at the University of Rochester) said you can do AI in Canada, but you have to work harder and be better than MIT and Stanford, otherwise you will get no acknowledgement. Canada is now acknowledged as a world leader in AI, largely thanks to Canadian AI leadership in the last two decades of the 20th century (1980–2000).

I was lucky to participate in the growth of both people and quality of that period, and now see the benefits for all current faculty and graduate students, as well as the increasingly positive impact on Canadian knowledge industries.

AC: As a new advisor to ROSS, what are you most excited about?

RG: I am most excited about ROSS providing leadership in delivering measurable impact from accumulating and developing AI technology. AI scientists don’t always agree on the timeframe to impact of new AI ideas, but without a channel to demonstrate impact, there is no feedback on what was anticipated. The ROSS leadership has exhibited what I would characterize as uncommon wisdom and stability about how to incrementally develop a market that will embrace impact at an acceptable rate.

AC: And speaking of ROSS, you know the founders well. What is it about these particular guys that you feel helps ROSS succeed?

RG: Genuine passion for what they are attempting, and uncommon discipline in staying the course.

AC: Are there applications to AI that we can’t yet imagine? Can you imagine one for us?

RG: I have always said that there are several kinds of AI-complete problems that we have approximated but not yet solved. For example, some stories about robotic sheep shearing describe immobilizing the sheep by passing a weak electrical current through it, causing sufficient tension to allow shearing without damage. Now consider trying to build a robotic system to re-diaper a baby, with the mother watching.

Another is being able to simply ask a search engine a question like “Give me 2–3 video clips of a politician contradicting him or herself, before and after an election.” Apparently this is much more difficult, because most humans have not figured it out, much less we AI scientists.

AC: What is juris-informatics?

RG: Ken Satoh’s invention of the term “juris-informatics” arises from an adjustment to “bio-informatics.” In general, the idea is to consider how all of computer science, artificial intelligence, and machine learning can provide a framework for automating the capture of and reasoning with legal or “jurisprudence” information.

AC: How does juris-informatics help advance the legal profession, in practical terms?

RG: Of immediate interest and practical benefit is the use of AI to speed up some aspects of legal proceedings, like the process of legal research. Legal research is tedious and time-consuming because it typically takes a lot of work, in both statute law and case law, to identify the documentation (statutes, cases) relevant to a particular legal situation. This practical use of AI, especially modern technologies for natural language processing (NLP), provides the basis for economic return for law firms that employ those technologies, and for democratizing the law, sometimes referred to as “Access to Law.”
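The retrieval step at the heart of that kind of legal research can be sketched in miniature. The following toy example is not drawn from any real legal-research product: it ranks a small invented corpus by simple query-term overlap, whereas real systems use far richer NLP.

```python
# Toy relevance ranking over a tiny invented "legal corpus":
# score each document by how often the query's terms appear in it.
from collections import Counter

def rank_documents(query, documents):
    """Return document ids ordered from most to least query-term overlap."""
    query_terms = set(query.lower().split())
    scores = []
    for doc_id, text in documents.items():
        counts = Counter(text.lower().split())
        score = sum(counts[t] for t in query_terms)
        scores.append((score, doc_id))
    return [doc_id for score, doc_id in sorted(scores, reverse=True)]

corpus = {
    "statute_a": "tenant may terminate a lease with written notice",
    "case_b": "court held the contract void for lack of consideration",
    "statute_c": "notice of termination must be given in writing",
}
print(rank_documents("terminate lease notice", corpus))
# → ['statute_a', 'statute_c', 'case_b']
```

Production systems would replace the word-overlap score with learned embeddings and legal-domain language models, but the shape of the problem — query in, ranked statutes and cases out — is the same.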

These immediate opportunities will be followed by increasingly sophisticated technologies, which not only exploit existing AI technologies, but help capture and exploit legal concepts like hypothetical reasoning, in order to provide support for the analysis and even synthesis of legal reasoning and legal argumentation.


AC: Tell me about the significance of the AlphaGo project. What did that demonstration teach you — and us?

RG: AlphaGo, in the spirit of the current dominance of empirical methods in AI, demonstrates that we have sufficient AI insight and computing power to automate performance beyond human capabilities on a restricted class of problems (full-information games).

AC: Can you give us a quick overview of how Meerkat went from initial idea to finished product?

RG: Meerkat began as a demonstration of how clustering algorithms should be coupled with information visualization, to help humans make inferences from alternative clustering methods. This coupling not only demonstrates the need for visual representations of large data sets compressed into clusters, but exposes the fact that no one visualization can support all possible kinds of inferences that humans would like to make about those large data sets.

AC: Can you tell me about something new the team is working on at the Alberta Machine Intelligence Institute (AMII)?

RG: The popularity of deep learning has exposed the challenge now labelled “Explainable AI,” sometimes written as “XAI” after a new research program of the American funding agency DARPA. The idea is to pursue the research required to transform deep learning models into ones that can reveal explanations of their behaviour, as demanded, for example, by applications in medicine and law.

We have an emerging project on what we call “Deep Visual Explanation,” which combines lessons learned on explanatory reasoning from abductive logic programming, and logic-based visualization.

AC: You have been working on the “application of machine learning to visual explanation and natural language processing, with focus on legal reasoning.” Can you explain what that is to a layperson?

RG: Humans are simply not good at managing large volumes of data, whether it is in the form of text, images, or signals in many different modalities. Identifying legal concepts extracted from large legal data sets, and presenting them to humans in visual terms, can amplify the power of human reasoning. For example, selecting only the several cases relevant to an emerging case, and then visually presenting the case components most relevant, allows human judgement to consider relationships that would otherwise never emerge.

AC: How has the application of practical belief revision and constraint programming to scheduling, layout, and web mining been applied to various businesses?

RG: The philosophical idea of “belief revision” is about how to adjust an accumulating database of observations about the world. If you know “it is raining,” and then you see that it is no longer raining, you adjust your beliefs about the current state of the weather. When facts accumulate at a rate too fast for a human to process (so that you become a bit muddled about whether you believe it is raining or not), algorithmic support for managing a high-volume stream of world observations is required.
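The raining example can be sketched minimally. The names here (`BeliefBase`, `revise`, `holds`) are illustrative, not from any particular system; the sketch simply lets newer observations override contradictory earlier beliefs.

```python
# Minimal sketch of belief revision: a store of propositions updated
# as new, possibly contradictory observations stream in.
class BeliefBase:
    def __init__(self):
        self.beliefs = {}  # proposition -> currently believed truth value

    def revise(self, proposition, value):
        """Adopt the new observation, overriding any contradictory belief."""
        self.beliefs[proposition] = value

    def holds(self, proposition):
        """Current belief about a proposition, or None if never observed."""
        return self.beliefs.get(proposition)

kb = BeliefBase()
kb.revise("raining", True)    # observation: it is raining
kb.revise("raining", False)   # later observation: it has stopped
print(kb.holds("raining"))    # → False, the newer observation wins
```

Real belief-revision systems are far subtler — they must decide which of several older beliefs to retract when a new fact contradicts their logical consequences — but the core operation is this kind of consistent update.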

In practical terms, if you have a fast-changing high volume stream of internet advertisements (like Google does), then you need automated systems to rapidly solve placement constraints (where do the ads appear), which is just a combination of geometric constraints, belief revision, and graphics design (visualization).
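As a toy illustration of those geometric placement constraints — and emphatically not Google’s actual system — consider a first-fit heuristic that packs ads of given widths into fixed-width rows:

```python
# First-fit packing of ad widths into rows of fixed width: a toy stand-in
# for the geometric constraint solving described above.
def place_ads(ad_widths, row_width):
    """Assign each ad to the first row with room; open new rows as needed."""
    rows = []          # each row is a list of placed ad widths
    remaining = []     # free horizontal space left in each row
    for w in ad_widths:
        if w > row_width:
            raise ValueError(f"ad of width {w} cannot fit in any row")
        for i, free in enumerate(remaining):
            if w <= free:
                rows[i].append(w)
                remaining[i] -= w
                break
        else:
            rows.append([w])
            remaining.append(row_width - w)
    return rows

print(place_ads([300, 250, 300, 160], row_width=600))
# → [[300, 250], [300, 160]]
```

A real placement engine would solve two-dimensional constraints under revision as ads arrive and expire, but even this one-dimensional greedy version shows how placement reduces to constraint satisfaction.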

AC: As you often work in Japan, do you have any book recommendations for visitors to Tokyo? I recommend the Max Danger books.

RG: My favourite book that helped me understand Japan was Keita Genji’s “The Lucky One,” which is very hard to find these days.

AC: What do you think is the biggest hurdle in reaching the potential of AI — particularly in solving real-world problems?

RG: A central theme of some public academic research has been the “democratization of knowledge,” a theme running through much literature and perhaps most obviously popularized by George Orwell’s slogan “knowledge is power.” The best ultimate value of pervasive AI is to make the same information available to everyone, so that one can always identify, for example, “fake news.” The world will be a more stable, sustainable planet when we can achieve consensus based on fact.