ChatGPT: 30 Year History | How AI Learned to Talk
Art of the Problem | 2023-11-27 | This video explores the journey of AI language models, from their modest beginnings through the development of OpenAI's GPT models. Our journey takes us through the key moments in generative neural network research involved in next-word prediction. We delve into the early experiments with tiny language models in the 1980s, highlighting significant contributions by researchers like Jordan, who introduced Recurrent Neural Networks, and Elman, whose work on learning word boundaries revolutionized our understanding of language processing. It leaves us with a question: what is thought? Is simulated thought, thought? Featuring Noam Chomsky, Douglas Hofstadter, Michael I. Jordan, Jeffrey Elman, Geoffrey Hinton, Ilya Sutskever, Andrej Karpathy, Yann LeCun, Sam Altman, and more.
00:00 - Introduction
00:32 - Hofstadter's thoughts on ChatGPT
01:00 - recap of supervised learning
01:55 - first paper on sequential learning
02:55 - first use of state units (RNN)
04:33 - first observation of word boundary detection
05:30 - first observation of word clustering
07:16 - first "large" language model (Hinton/Sutskever)
10:10 - sentiment neuron (Ilya | OpenAI)
12:30 - transformer explanation
15:50 - GPT-1
17:00 - GPT-2
17:55 - GPT-3
18:20 - in-context learning
19:40 - ChatGPT
21:10 - tool use
23:25 - philosophical question: what is thought?

What Makes Human Intelligence So Special?
Art of the Problem | 2024-04-11 | FULL VIDEO: youtube.com/watch?v=5EcQ1IcEMFQ

Intelligence: A 600 Million Year Story
Art of the Problem | 2024-04-07 | FULL VIDEO: youtube.com/watch?v=5EcQ1IcEMFQ

How Intelligence Evolved | A 600 million year story.
Art of the Problem | 2024-04-06 | This video follows the evolution of intelligence, from simple nerve nets to the complex neural networks in humans that enable consciousness, learning, and imagination.
00:00 - Introduction
01:13 - nerve nets
01:29 - steering
02:20 - reinforcement learning
06:23 - mental simulation
08:50 - 3rd person simulation
11:50 - language

3 Layers of Learning #evolution #ai #machinelearning #science
Art of the Problem | 2024-01-22 | Full video here: youtube.com/watch?v=yLAwDEfzqRw
Unlock the essence of intelligence by exploring the layers of learning. This video follows the progression of evolutionary, experiential, and abstract learning, forming the bedrock of artificial intelligence. It provides insight into various learning paradigms, including unsupervised learning, supervised learning, reinforcement learning, association learning, and the ingenuity of genetic algorithms. As part of the narrative, the essence of language and its role in advancing intelligence is explored. This is Part 2 of my AI/Deep Learning series, serving as a bridge to understanding modern AI frameworks like ChatGPT and GPT models. Embark on this intellectual journey to grasp how the lineage of learning has sculpted today's AI landscape.

How the first neural network learned (backpropagation) #ai #machinelearning
Art of the Problem | 2024-01-12 | FULL VIDEO: youtube.com/watch?v=r1U6fenGTrU

What is a distributed representation? #ai #machinelearning #technology
Art of the Problem | 2024-01-09 | I explain a distributed representation using a piano analogy. Full video: youtube.com/watch?v=e5xKayCBOeU

I asked AI to draw itself (DALL·E 2) #aiart
Art of the Problem | 2023-12-19 | Several have asked me about the visualizations I used in my last video, so I'm sharing several I made which didn't get included in the main video (youtube.com/watch?v=OFS90-FX6pg).

Origin of Batteries
Art of the Problem | 2023-12-19 | Full video: youtube.com/watch?v=8jlMuBn6Zow
The Voltaic pile and the discovery of electromagnetism. These technologies lead us to electromagnetic communication systems... and a communications revolution. Featuring observations by Alessandro Volta & Hans Christian Oersted.

Computers can't roll dice
Art of the Problem | 2023-12-17 | How do computers generate random numbers if they can't roll dice? Random number generators.

Why Deep Neural Networks Beat Shallow Ones. #ai #technology #science
Art of the Problem | 2023-12-14 | FULL VIDEO: youtube.com/watch?v=e5xKayCBOeU Why do neural networks need to be deep? In this video we explore how neural networks transform perceptions into concepts. This video unravels the mystery behind how machines interpret input data, such as images or sounds, and categorize them into recognizable concepts. From the basic structure of neurons and layers to the intricate play of weights and activations, get a comprehensive understanding of the learning process. Explore real-world applications like handwriting recognition and how layered processing aids in effective data categorization. Whether it's distinguishing between summer and winter days based on temperature and humidity or recognizing handwritten digits, the magic lies in the layered architecture of neural networks. This video elucidates how these artificial networks mimic the human brain's ability to interpret, recognize, and reason, marking a significant stride in AI research towards creating machines capable of reasoning. Why layers matter.

How AI Systems Represent Concepts #ai #ailearning
Art of the Problem | 2023-12-12 | How Neural Networks Define Concepts
Is Intelligence related to prediction? #ai #chatgpt #technology #ailearning
Art of the Problem | 2023-12-09 | What is the connection between prediction and intelligence? FULL VIDEO: youtube.com/watch?v=OFS90-FX6pg
This video explores the journey of language models, from their modest beginnings through the development of OpenAI's GPT models, and hints at Q* / Google Gemini. Our journey takes us through the key moments in neural network research involved in next-word prediction. We delve into the early experiments with tiny language models in the 1980s, highlighting significant contributions by researchers like Jordan, who introduced Recurrent Neural Networks, and Elman, whose work on learning word boundaries revolutionized our understanding of language processing. Featuring Noam Chomsky, Douglas Hofstadter, Michael I. Jordan, Jeffrey Elman, Geoffrey Hinton, Ilya Sutskever, Andrej Karpathy, Yann LeCun, Sam Altman, and more.

The only AI moat is Community #ai #education #chatgpt
Art of the Problem | 2023-12-07 | FULL TALK: youtube.com/watch?v=32R81RylpZk | Designing educational experiences in the age of AI: My journey with Pixar & Khan Academy (Brit Cruise / Art of the Problem)

Designing Educational Experiences with Pixar and Khan Academy | Brit Cruise | Talk at Google
Art of the Problem | 2023-12-06 | Brit Cruise | Talk at Google: My journey with Pixar & Khan Academy, leading to Pixar in a Box and Story Xperiential
Key moments in this talk include:
00:00 - Introduction
01:00 - Art of the Problem origin story
03:40 - My work & experiments with Khan Academy & Sal Khan
10:30 - The process behind 'Pixar in a Box' and 'Imagineering in a Box'
14:00 - Startup experiment with Mystery Science
16:00 - Forming my company X in a Box
16:36 - Assume YouTube exists...
17:14 - From Pixar in a Box to Story Xperiential
19:30 - Social learning insight
24:59 - Thoughts on AI in Education
31:40 - Q&A - discussions on the future of AI in education
This is for educators, students, tech enthusiasts, and anyone passionate about the intersection of education and technology.
Hope you enjoyed, stay tuned for more!

Why Bitcoin has Value #bitcoin #bitcoinnews #bitcoinmining
Art of the Problem | 2023-12-05 | FULL VIDEO: youtube.com/watch?v=ZKwqNgG-Sv4 Overview of the key insights behind Bitcoin. Covers the history of money, value, gold, blockchain, proof of work, hash functions & mining. This is the best overview of Bitcoin for those who want a deep intuition. We introduce the problem of sending cash in electronic form over the internet and the need for a reliable electronic cash method, and introduce Bitcoin as a solution.

The Pattern of Prime Numbers #shorts
Art of the Problem | 2023-12-03 | Full video: youtube.com/watch?v=3RfYfMjZ5w0

Can you detect a coin flip? #quiz #gambling #statistics
Art of the Problem | 2023-12-01 | Can you detect coin flips vs guesses? If you enjoyed this, check out my full series on cryptography: youtube.com/watch?v=lICOtR078Gw&list=PLB4D701646DAF0817&ab_channel=ArtoftheProblem

ChatGPT: A 30 Year History #chatgpt #openai #ai
Art of the Problem | 2023-11-30 | FULL VIDEO: youtube.com/watch?v=OFS90-FX6pg This video explores the journey of language models, from their modest beginnings through the development of OpenAI's GPT models, and hints at Q*. Our journey takes us through the key moments in neural network research involved in next-word prediction. We delve into the early experiments with tiny language models in the 1980s, highlighting significant contributions by researchers like Jordan, who introduced Recurrent Neural Networks, and Elman, whose work on learning word boundaries revolutionized our understanding of language processing. Featuring Noam Chomsky, Douglas Hofstadter, Michael I. Jordan, Jeffrey Elman, Geoffrey Hinton, Ilya Sutskever, Andrej Karpathy, Yann LeCun, Sam Altman, and more.

Why Transformers Are So Powerful
Art of the Problem | 2023-09-10 | I find most explanations get lost in the details, so I challenged myself to come up with a one-sentence description.
It's a new kind of layer capable of adapting its connection weights based on input context. This allows one layer to do what would have taken many. I hope this helps you!

Art of the Problem Live Stream
Art of the Problem | 2023-01-24 | ...

Say You Love Me (2020 Experimental Documentary)
Art of the Problem | 2020-12-22 | This experimental film was born in 2018 when three ideas collided in my head. I was 1. bored with polished reenactment documentaries, 2. missing the fly-on-the-wall exploration of random people, and 3. struck by the realization that a smartphone can replace the typical crew. So I tried to make a film 'out of nothing' as an editing challenge, by following several people for a year in 2019.

How AI Learns Concepts
Art of the Problem | 2020-07-07 | Why do neural networks need to be deep? In this video we explore how neural networks transform perceptions into concepts. Why layers matter.

How Recommender Systems Work (Netflix/Amazon)
Art of the Problem | 2020-02-28 | The key insights behind content-based and collaborative filtering (matrix factorization). How Amazon, Netflix, Facebook, and others predict what you will like.
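The matrix factorization idea behind collaborative filtering can be sketched in a few lines. This is a minimal illustration with made-up ratings, not the exact algorithm from the video or paper: learn low-rank user and item factors by gradient descent on the observed entries only, then use their product to predict the missing cells.

```python
import numpy as np

# Hypothetical 4-user x 3-item ratings matrix; 0 marks a missing rating.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5],
              [0, 1, 4]], dtype=float)
mask = R > 0                              # which entries are observed

k, lr, reg = 2, 0.01, 0.01               # rank, learning rate, regularization
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(5000):
    err = (R - P @ Q.T) * mask           # error only on observed ratings
    P += lr * (err @ Q - reg * P)        # gradient step on user factors
    Q += lr * (err.T @ P - reg * Q)      # gradient step on item factors

pred = P @ Q.T                           # includes predictions for the 0 cells
```

The key design point is that the loss is computed only over observed ratings, so the learned factors generalize to the unobserved cells rather than fitting the placeholder zeros.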
Paper in this video: Matrix Factorization Techniques for Recommender Systems: https://www.inf.unibz.it/~ricci/ISR/papers/ieeecomputer.pdf

How AI Learns (Backpropagation 101)
Art of the Problem | 2019-11-14 | Explore the fundamental process of backpropagation in artificial intelligence (AI). This video shows how neural networks learn and improve by adapting to data during each training phase. Backpropagation is crucial in calculating errors and updating the network's weights to enhance decision-making within the AI system. This tutorial breaks down the core mechanics of neural network training, making it easier to understand for individuals interested in AI, machine learning, and network training. By understanding backpropagation, viewers can better grasp how neural networks evolve to process information more accurately. Keywords: Rosenblatt, AI, Artificial Intelligence, Neural Networks, Backpropagation, Machine Learning, Network Training, Data Adaptation, Error Calculation, Performance Tuning, Decision Making.

Secret Sharing Explained Visually
Art of the Problem | 2019-10-22 | The IEEE Information Theory Society presents an overview of Adi Shamir's 1979 paper on secret sharing. This is part of our series on the greatest papers from information theory. Link to playlist: youtube.com/playlist?list=PLbg3ZX2pWlgJOTf5YXNq-rdXXuUkJTXHm
Paper featured in this video: http://users.cms.caltech.edu/~vidick/teaching/101_crypto/Shamir1979.pdf
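Shamir's 1979 scheme from the paper above can be sketched compactly. A minimal sketch over a toy prime field (the prime and secret here are illustrative choices, not from the paper): the secret is the constant term of a random degree-(k-1) polynomial, each share is a point on it, and any k shares recover the secret by Lagrange interpolation at x = 0.

```python
import random

P = 2**61 - 1  # a Mersenne prime, chosen here for convenience

def make_shares(secret, k, n):
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0, all arithmetic mod P.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P        # numerator of L_i(0)
                den = den * (xi - xj) % P    # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(12345, k=3, n=5)
# Any 3 of the 5 shares recover the secret; fewer than 3 reveal nothing.
print(recover(shares[:3]))
```

`pow(den, P - 2, P)` computes the modular inverse via Fermat's little theorem, which is why the modulus must be prime.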
Support this program: patreon.com/artoftheproblem

From Bacteria to Humans (Evolution of Learning)
Art of the Problem | 2019-06-27 | This video follows the progression of evolutionary, experiential, and abstract learning, forming the bedrock of artificial intelligence.

What is Deep Learning?
Art of the Problem | 2019-05-17 | The story of deep learning, a technology that has evolved from the humble beginnings of neural networks into THE dominant force in artificial intelligence. This journey begins in the mid-20th century, tracing the evolution from traditional programming paradigms to a radical shift towards self-learning systems. Explore how deep learning systems, inspired by the human brain's ability to learn from experience, are now capable of outperforming humans in numerous tasks once thought to be far beyond machines. Discover the profound impact of this technology on various domains, from mastering complex games like chess to revolutionizing language translation and image recognition. As we unravel the core principles of deep learning, learn about the potential and the challenges that lie ahead in harnessing the power of neural networks to replicate human intuition. This episode is a doorway to understanding the essence of machine learning, the significance of distributed representation over symbolic computation, and the boundless possibilities that deep learning unveils in our quest towards artificial intelligence.
Join us in this enlightening exploration, part of our AI/Deep Learning series, as we delve into a future where machines not only learn but also think in patterns, reshaping the realms of computing and human endeavor.
Keywords: Deep Learning, Neural Networks, Artificial Intelligence, Machine Learning, Self-learning Systems, Chess, Language Translation, Image Recognition, Distributed Representation, Symbolic Computation, Traditional Programming, Human Intuition, Computer Vision, Neural Activity, Evolution of AI, Deep Blue, AlphaZero, AI Competitions, ImageNet, Geoffrey Hinton, Parallel Computation, Human Mind, Intelligence

TEASER: Episode 5 (Artificial Intelligence/Deep Learning)
Art of the Problem | 2019-04-15 | Welcome to Art of the Problem. This is a teaser for episode 5 on AI/Deep Learning. I've also produced episodes on Cryptography, Information Theory, Computer Science, Bitcoin/Blockchain & more.

Funcionamiento de Bitcoin: Confianza mecánica
Art of the Problem | 2018-12-19 | What Bitcoin is, how it works, and why it matters: understanding how this technology came about and what role it is expected to play as a technological and economic innovation.

The Beauty of Lempel-Ziv Compression
Art of the Problem | 2018-12-12 | The Information Theory Society presents how the Lempel-Ziv lossless compression algorithm works. It was published in 1978 (LZ78) and improved by Welch in 1984, leading to the popular LZW compression. This video covers the key insight in their paper: how to construct a codebook that doesn't need to be shared with the sender. It's a subtle yet beautiful idea which is still in use today.

Hamming & low density parity check codes
Art of the Problem | 2018-11-20 | The Information Theory Society presents the key concepts needed to understand low-density parity-check (LDPC) codes. It's a blend of repetition codes, parity check bits, and Hamming codes. This paper was highly influential and has over 9,000 citations.

Bitcoin Documentary | The Trust Machine
Art of the Problem | 2018-05-28 | Overview of the key insights behind the Bitcoin whitepaper. Covers the history of money, value, gold, blockchain, proof of work, hash functions & mining. This is the best overview of Bitcoin for those who want a deep intuition.
We introduce the problem of sending cash in electronic form over the internet and the need for a reliable electronic cash method, and introduce Bitcoin as a solution. Bitcoin is a decentralized digital currency that operates on a shared-ledger model and eliminates areas of trust through distributed responsibility for validating and updating transactions among multiple nodes. The speaker describes the process of Bitcoin mining, which involves solving complex mathematical problems to add new blocks to the blockchain, and explains the "proof of work" consensus mechanism that motivates miners to participate in the network. Overall, the video provides insight into the security, reliability, and significance of Bitcoin in the modern global community.

The Trust Machine: Teaser
Art of the Problem | 2018-05-23 | This is an introduction to a full video you can watch here: youtube.com/watch?v=ZKwqNgG-Sv4

How space-time codes work (5G networks)
Art of the Problem | 2017-10-23 | The Information Theory Society presents a brief history of wireless communication (radio), leading to the idea of multiple-antenna wireless systems (MIMO) and space-time codes. 5G networks.
Written by: Brit Cruise, Matthieu Bloch, Michelle Effros (corrected from video), Suhas Diggavi (corrected from video)

How internet communication works: Network Coding
Art of the Problem | 2017-10-10 | The Information Theory Society presents a brief history of internet communication and packet-switched networks, leading to the idea of network coding.
Paper featured in this video: Network Information Flow - http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=F8130EA435749A8E44E666D09CCCDC8D?doi=10.1.1.534.2207&rep=rep1&type=pdf

P = NP Explained Visually (Big O Notation & Complexity Theory)
Art of the Problem | 2017-10-05 | A visual explanation of P vs. NP and the difference between polynomial vs. exponential growth. Dive deep into the enigma of complexity theory with my exploration of P vs. NP. This video delves into the fundamental principles that govern the computational universe, influenced by the brilliant minds of Von Neumann and Turing.
- The origins of the universal machine and the Von Neumann architecture
- The conceptual leap from simple operations to complex algorithms
- How Von Neumann's EDVAC paved the way for modern computing
- The bottlenecks of time and space that challenge computation
- John Nash's groundbreaking perspective on computational growth
- The distinction between Polynomial (P) and Exponential (EXP) time problems
- The intriguing world of "easy to solve" vs. "hard to crack" algorithms
- The captivating realm of NP-complete problems and their significance in computing
- The 'shape of the growth curve' and its impact on classifying computational problems
- Nested loops and their contribution to algorithmic complexity
- The concept of one-way functions and their critical role in computer security
- The practical implications of solving NP-complete problems
- The ongoing quest to define the boundary between P and NP
- The million-dollar question that stands at the pinnacle of computer science

Join us on this intellectual voyage as we unravel the secrets of computational requirements, the intricacy of algorithms, and the pivotal problem that has mystified some of the greatest minds in mathematics and computer science.
Whether you're a seasoned programmer, a mathematics enthusiast, or simply curious about the inner workings of computers, this video is your gateway to understanding one of the most profound questions in computer science: Is P equal to NP?
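The "easy to solve" vs. "hard to crack" distinction above can be made concrete with subset-sum, a classic NP-complete problem. A brute-force sketch with made-up example values: finding a solution may take time exponential in the input size, while verifying a proposed solution is a quick polynomial-time check.

```python
from itertools import combinations

def solve(nums, target):
    # Brute-force search: tries up to 2^len(nums) subsets (exponential time).
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify(nums, subset, target):
    # Checking a proposed certificate is cheap: a linear scan and a sum.
    return all(x in nums for x in subset) and sum(subset) == target

nums = [3, 9, 8, 4, 5, 7]
solution = solve(nums, 15)
print(solution, verify(nums, solution, 15))
```

This asymmetry, where certificates are easy to check but apparently hard to find, is exactly what the P vs. NP question asks about.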
Support new content: patreon.com/artoftheproblem

Turing machines explained visually
Art of the Problem | 2017-05-17 | A Turing machine is a model of a machine which can mimic any other (known as a universal machine). What we call "computable" is whatever a Turing machine can write down. This video is about how it was conceived and why it works, using a physical explanation. This is part of my Computer Science series (youtube.com/watch?v=fjMU-km-Cso&list=PLbg3ZX2pWlgI_ej6ZhGd45-cPoWLZD9pT)

What is a computer? (the history covering Leibniz, Babbage & Lovelace)
Art of the Problem | 2016-11-02 | The origin and history of computers, from Gottfried Leibniz's dreams of mechanizing mental work through Charles Babbage's analytic engine. It ends with Ada Lovelace's famous insights about computer programming, ushering in a new era of Computer Science which explodes in the 20th century.
To learn more about Lovelace check out this article: http://blog.stephenwolfram.com/2015/12/untangling-the-tale-of-ada-lovelace

What is Logic?
Art of the Problem | 2016-08-28 | Aristotle's contributions to logic are explored in PART 3 of our series on Computer Science. This video explains deduction, abstraction, the law of non-contradiction & syllogisms. Please support this program: patreon.com/artoftheproblem or Bitcoin: 1J29nKVys3anVaQNnyW8DBkD4vCzFxdB2r

What is an Algorithm?
Art of the Problem | 2016-05-19 | Two essential ideas behind algorithms are explored. This is part 2 of our series on Computer Science.

What is Computer Science? | The Turing test
Art of the Problem | 2016-04-04 | This video explores the Turing test to explain declarative vs. procedural knowledge. This is PART 1 of my series on Computer Science, grab some popcorn and enjoy :)
The Origin of Computer Science (Leibniz, Boole, Babbage, Turing)
Art of the Problem | 2016-01-20 | Introduction to our series on the origins and history of Computer Science. The story of a collision between math and philosophy, featuring Leibniz, Boole, Babbage, Turing, Shannon, Russell, Gödel and many more...
Support this independent effort on Patreon: https://www.patreon.com/artoftheprobl...
Episode 3 Teaser
Art of the Problem | 2015-10-04 | Welcome to Art of the Problem. This is a teaser for episode #3 on CS (computability theory). We've produced episodes on Cryptography, Information Theory, Computer Science, Bitcoin & AI/Deep Learning so far, and are always working on more content.

The search for Extraterrestrial Intelligence
Art of the Problem | 2014-04-15 | How can we know if alien signals are intelligent? What does it mean to be intelligent? This final chapter features Carl Sagan, Philip Morrison, and Kent Cullers (SETI - the search for extraterrestrial intelligence).
Link to paper by Doyle: web.archive.org/web/20060903080122/http://faculty.vetmed.ucdavis.edu/faculty/bjmccowan/Pubs/McCowanetal.JCP.2002.pdf

Error correction codes (Hamming coding)
Art of the Problem | 2014-01-08 | How do we communicate digital information reliably in the presence of noise? Hamming's (7,4) error correction code demonstrates how parity bits can help us recover from transmission/storage errors. This must be taken into account when thinking about Shannon's idea of channel capacity and information rate. (hamming code, error correction)

Entropy is the limit of compression (Huffman Coding)
Art of the Problem | 2014-01-05 | What is the limit of compression? Huffman codes are introduced in order to demonstrate how entropy is the ultimate limit of compression.

Claude Shannon's Information Entropy (Physical Analogy)
Art of the Problem | 2013-11-27 | Entropy is a measure of the uncertainty in a random variable (message source). Claude Shannon defined the "bit" as the unit of entropy (the uncertainty of a fair coin flip). In this video, information entropy is introduced intuitively using bounce machines & yes/no questions.
Note: This analogy applies to higher-order approximations; we simply create a machine for each state and average over all machines!

Claude Shannon: A Mathematical Theory of Communication
Art of the Problem | 2013-06-27 | Claude Shannon demonstrated how to generate "English-looking" text using Markov chains and how this gives a satisfactory representation of the statistical structure of any message. He uses this model as a framework with which to define 'information sources' and how they should be measured.
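Shannon's text-generation idea described above can be sketched in a few lines. A minimal sketch with a made-up sample text: count which words follow which, then generate "English-looking" output by sampling the resulting Markov chain.

```python
import random
from collections import defaultdict

# Toy corpus standing in for Shannon's sample text.
text = ("the cat sat on the mat the dog sat on the log "
        "the cat saw the dog and the dog saw the cat").split()

# Empirical next-word distribution: a first-order Markov chain over words.
chain = defaultdict(list)
for prev, word in zip(text, text[1:]):
    chain[prev].append(word)

def generate(start, length, seed=0):
    # Walk the chain: each step samples from the observed successors.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

print(generate("the", 12))
```

Higher-order approximations, as the note above suggests, amount to keying the chain on the last n words instead of just one.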