Multiway Systems as Models to Understand Mind and Universe - a Conversation with Stephen Wolfram
Cognitive AI | 2022-04-13

Our earliest models of reality were expressed as static structures and geometry, until mathematicians of the 17th century developed differential calculus, a framework that allowed us to capture aspects of the world as dynamical systems. The 20th century introduced the concept of computation, and we began to model the world through state transitions. Stephen Wolfram suggests that we may be about to enter a new paradigm: multicomputation. At the core of multicomputation is the non-deterministic Turing machine, one of the more arcane ideas of 20th-century computer science. Unlike a deterministic Turing machine, it does not transition from one state to the next, but to all possible successor states simultaneously, resulting in structures that emerge from the branching and merging of causal paths.
Stephen Wolfram studies the resulting multiway systems as a model for foundational physics. Multiway systems can also be used as an abstraction to understand biological and social processes, economic dynamics, and model-building itself.
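The idea of a multiway system can be made concrete with a small sketch: starting from an initial state, apply every rewrite rule at every matching position and keep the whole set of successor states, so that paths branch and, when different rewrites reach the same string, merge. The particular rules and starting state below are illustrative assumptions, not taken from Wolfram's papers.

```python
def apply_rule(state, lhs, rhs):
    """Yield every string obtained by rewriting one occurrence of lhs to rhs."""
    start = 0
    while True:
        i = state.find(lhs, start)
        if i == -1:
            return
        yield state[:i] + rhs + state[i + len(lhs):]
        start = i + 1

def multiway_step(states, rules):
    """One multiway step: apply every rule at every position of every state."""
    successors = set()
    for s in states:
        for lhs, rhs in rules:
            successors.update(apply_rule(s, lhs, rhs))
    return successors

# Illustrative rules: A -> AB and B -> A, starting from "A".
rules = [("A", "AB"), ("B", "A")]
frontier = {"A"}
for step in range(3):
    frontier = multiway_step(frontier, rules)
    print(step + 1, sorted(frontier))
```

Because successors are collected in a set, distinct causal paths that produce the same state merge back together: by the third step, the string "ABA" is reachable both from "ABB" and from "AA", which is exactly the branching-and-merging structure the description refers to.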
In this conversation, we want to explore whether mental processes can be understood as multiway systems, and what the multicomputational perspective might imply for memory, perception, decision making and consciousness.
About the Guest: Stephen Wolfram is one of the most interesting and least boring thinkers of our time, well known for his unique contributions to computer science, theoretical physics and the philosophy of computation. Among other things, Stephen is the creator of the Wolfram Language (the language of Mathematica) and of the knowledge engine Wolfram|Alpha, the author of the books A New Kind of Science and A Project to Find the Fundamental Theory of Physics, and the founder and CEO of Wolfram Research.
We anticipate that this will be an intellectually fascinating discussion; please consider reading some of the following articles ahead of time:
Chat log for the recording: bit.ly/3xqe8tB

Generalist AI beyond Deep Learning
Cognitive AI | 2023-01-11

Generative AI represents a big breakthrough towards models that can make sense of the world by dreaming up visual, textual and conceptual representations, and these systems are becoming increasingly generalist. While current AI systems are based on scaling up deep learning algorithms with massive amounts of data and compute, biological systems seem to be able to make sense of the world using far fewer resources. This phenomenon of efficient intelligent self-organization still eludes AI research, creating an exciting new frontier for the next wave of developments in the field. Our panelists will explore the potential of incorporating principles of intelligent self-organization from biology and cybernetics into technical systems as a way to move closer to general intelligence. Join in on this exciting discussion about the future of AI and how we can move beyond traditional approaches like deep learning!
This event is hosted and sponsored by Intel Labs as part of the Cognitive AI series.

VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers
Cognitive AI | 2022-04-06

VL-InterpreT was accepted to CVPR 2022.
VL-InterpreT provides novel interactive visualizations for interpreting the attention and hidden representations in multimodal transformers. It is a task-agnostic and integrated tool that (1) tracks a variety of statistics in attention heads throughout all layers for both vision and language components, (2) visualizes cross-modal and intra-modal attentions through easily readable heatmaps, and (3) plots the hidden representations of vision and language tokens as they pass through the transformer layers. In this paper, we demonstrate the functionalities of VL-InterpreT through the analysis of KD-VLP, an end-to-end pretrained vision-language multimodal transformer-based model, on Visual Commonsense Reasoning (VCR) and WebQA, two visual question answering benchmarks. Furthermore, we present a few interesting findings about multimodal transformer behaviors that were learned through our tool.

Vectors of Cognitive AI: Self-Organization
Cognitive AI | 2022-02-24

Panelists: Prof. Christoph von der Malsburg, Prof. György Buzsáki, Prof. Dave Ackley, Dr. Joscha Bach.
Biological and social agents are very different from our present approaches to technologically designed artificial agents. Technological systems are constructed “from outside in”: they extend a world with known, reliable functionality by forging a deterministic substrate into additional, required functions. This is true whether we are building a bicycle in a workshop or a learning algorithm in a software development environment. In contrast, biological systems (such as plants, or the mind of a human being) grow “from inside out”: they organize an indeterministic substrate with unreliable properties into a structure that converges to serving the required function, and that will even self-heal and regrow when damaged or disturbed. What can technological systems (and especially AI) learn from the self-organization of biology? What basic principles drive self-organization, and how do they lead to efficient, robust and adaptive implementations of intelligent information processing? How can we formally describe self-organizing systems in a computational context?

In this panel, we discuss perspectives on self-organization in the context of AI, neuroscience and general computation.
The seminal contribution "Attention Is All You Need" (Vaswani et al., 2017), which introduced the Transformer architecture, triggered a small revolution in machine learning. Unlike convolutional neural networks, which construct each feature out of a fixed neighborhood of signals, Transformers learn which data a feature on the next layer of a neural network should attend to. However, attention in neural networks is very different from the integrated attention in a human mind. In our minds, attention seems to be part of a top-down mechanism that actively creates a coherent, dynamic model of reality, and it plays a crucial role in planning, inference, reflection and creative problem solving. Our consciousness appears to be involved in maintaining the control model of our attention.
In this panel, we want to discuss avenues into our understanding of attention, in the context of machine learning, cognitive science and future developments of AI.
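The contrast drawn above can be seen directly in the mechanism itself. Below is a minimal NumPy sketch of scaled dot-product attention (the core operation from the Vaswani et al. paper); the shapes and random inputs are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query position scores its similarity against EVERY key position,
    # so the "receptive field" is learned per input rather than being a
    # fixed local neighborhood as in a convolution.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_queries, n_keys)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights                     # values mixed by learned weighting

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, feature dimension 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape, weights.shape)
```

Each row of `weights` is a probability distribution over all six input positions, which is the sense in which a Transformer "learns which data a feature should attend to" instead of reading from a fixed window.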
Full program and references: cognitive-ai-panel.webflow.io/panels/attention

Vectors of Cognitive AI: Motivation and Autonomy
Cognitive AI | 2021-12-28

How can we conceptualize and construct artificial agents with rich autonomy? How can we use computational models to understand the agency of humans, and shape the collaboration between human and AI agents? Our panel brings together a group of thinkers on artificial agency, motivation, emotion and sociality to discuss how intrinsic motivation gives rise to goal-directed behavior, the organization of cognitive structure, multi-agent collaboration and ethics.
Talks:
Cristiano Castelfranchi: Grounding Sociality in Goal Theory
Christian Balkenius: Motivation, Emotion, and Attention
Dietrich Dörner: The Competence Motivation
Joscha Bach: Motivation for individual and collective agency

Panel on Representational Paradigms for Cognitive AI
Cognitive AI | 2021-11-22

There is a wide gap between current machine learning representations and the way in which our minds represent reality. Our mental representations are dynamic, coherent, unified (in the sense that we establish relationships between all our domains of knowledge, in the context of a global universe), and they are updated on the fly. In this panel, we bring together some important thinkers and practitioners of cognitive science, robotics, AI and philosophy to discuss representations for future generations of AI systems.
This is the first in a series of events on Cognitive Artificial Intelligence. The goal of Cognitive AI is to build and understand systems that can make sense of their environment, combine knowledge and perception, learn to act on domains they have not encountered before, make autonomous decisions and explain them, interact deeply with people and human society.
We are proud to welcome our panelists:
Mark Bickhard: Cognition and Truth Value
Stephen Grossberg: How Each Brain Makes a Mind: From Brain Resonances to Conscious Experiences
Yulia Sandamirskaya: Memory, intentionality, and autonomy enabled by neuronal attractor dynamics
Jerome Busemeyer: Modeling cognition and decision using quantum probability theory
Steven Rogers: What are the tenets for machine representations (artificial qualia?) that enable flexible behaviors?
Joscha Bach: Perception, Reflection and Coherence
Program: cognitive-ai-panel.webflow.io/program

Knowledge Injection in Neural Networks: Panel Discussion
Cognitive AI | 2021-11-12

Gadi Singer, VP at Intel Labs, sits down with leading AI researchers and thought leaders Gary Marcus, Luis Lamb, Vered Shwartz, and Partha Talukdar. Watch as they discuss how knowledge injection and neuro-symbolic methods can mitigate some of the drawbacks of neural networks.