Science, Technology & the Future
John Wilkins - Knowledge, Understanding & Epistemic Communities
"The Extended Arm is an eleven-degree-of-freedom manipulator with wrist flexion, wrist rotation, thumb rotation, individual finger flexion, with each finger splitting open, so each finger can potentially be a gripper in itself. The artist’s fingers rest on a panel of switches enabling the selection of pre-programmed sequences of finger, thumb and wrist movements. The clicking fingers, the compressed air and solenoid generate the sounds when performing. The Extended Arm extends the artist’s right arm to primate proportions. "
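The control scheme in the quote above — a panel of switches that triggers pre-programmed sequences of finger, thumb and wrist movements — can be sketched roughly as follows. This is an illustrative Python sketch only; the joint names, sequences and interface are hypothetical stand-ins, not the artist's actual control software.

```python
# Illustrative sketch: a switch panel selecting pre-programmed movement
# sequences for a multi-degree-of-freedom manipulator.
# All joint names and sequences below are hypothetical examples.

JOINTS = [
    "wrist_flexion", "wrist_rotation", "thumb_rotation",
    "finger1_flexion", "finger2_flexion", "finger3_flexion",
    "finger1_split", "finger2_split",  # each finger can open into a gripper
]

# Each pre-programmed sequence is an ordered list of (joint, position) steps,
# with positions normalized to [0, 1].
SEQUENCES = {
    0: [("wrist_flexion", 0.5), ("finger1_flexion", 1.0)],
    1: [("thumb_rotation", 0.8), ("finger1_split", 1.0)],
}

def run_sequence(switch_id, actuate):
    """Play back the sequence mapped to a switch, calling `actuate` per step."""
    for joint, position in SEQUENCES.get(switch_id, []):
        actuate(joint, position)

# Example: record the commands that pressing switch 1 would issue.
log = []
run_sequence(1, lambda joint, pos: log.append((joint, pos)))
```

The point of the design is the indirection: the performer selects whole sequences rather than driving each of the eleven degrees of freedom directly.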
Many thanks for tuning in!
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: docs.google.com/forms/d/1mr9PIfq2ZYlQsXRIn5BcLH2onbiSI7g79mOH_AFCdIk
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: patreon.com/scifuture
c) Sharing the media SciFuture creates
Kind regards,
Adam Ford
- Science, Technology & the Future - #SciFuture - http://scifuture.org
0:00 Intro
1:00 Monica Anderson on what is understanding?
2:00 Joscha Bach answers what is understanding?
3:12 discrepancies in descriptions of understanding
3:54 Joscha on creativity vs deciding and understanding
7:19 Context-free models vs context-containing models (Monica)
10:18 Modelling & Embodiment (Joscha)
14:35 Language models (Monica & Joscha)
17:17 Are causal models required for understanding?
18:26 Can imitation become understanding?
20:26 Can we share understanding?
21:08 Systematic differences between people driving cars vs self-driving cars
22:45 Embodiment and symbol grounding. Do people share symbol groundings?
24:02 How is symbol grounding done? (Ben Goertzel, Monica, Joscha)
25:42 Language understanding, what's required to achieve it, and the Turing test
29:57 Trustworthy AI, human aesthetics - how to build a god, or keep the world pre-singularity
34:28 Trustworthiness of useful tools vs superhuman AGI
38:57 Open-ended intelligence
41:34 Why a panel on trustworthy AI?
44:24 Ascription of intentionality to weak AI - does the illusion make us happy?
46:19 Sophia the robot - useful simulacrum or an abomination? a salty divergence of opinion
49:47 Automated confabulation in GPT-3 - confabulation vs real explanation
52:40 Verification of levels of trustworthiness in AGI - how far can we take it?
58:48 Can everything ethically important be understood by humans? (Joscha)
59:19 GPT-3 confabulation vs human confabulation
01:00:22 Do we want AI to follow anthropomorphic ethics, or open ethics? Status quo ethics, or variable ethics?
01:07:24 Preferred game conditions in a future shared with AGI
01:12:10 Farewell Joscha Bach, welcome Hugo de Garis
01:13:27 Post-singularity family planning (and identity)
01:15:00 Governmental interest in AGI etc
01:18:31 Government interest in the threats of AI/AGI
01:21:07 Image generation, deep fakes and technology to validate truth
01:25:09 AI research in China vs the west
01:27:48 AGI and geopolitics
01:29:32 AGI chip to speed up pattern matching
01:35:52 Pandemic preparedness - how will the world deal with a far worse pandemic than Covid-19?
Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than, say, merely making predictions across massive datasets)?
Will add-on explanation modules be enough to make AI trustworthy?
Can imitation become understanding? Or do we need to develop an entirely different approach to AI than the
We are experiencing a revolution at the level of Epistemology which will affect much more than just the field of Machine Learning. We want to add more of these new Methods to our standard problem solving toolkit, but we need to understand the tradeoffs.
Bio: Monica Anderson, MSCS, is an independent AI and ML researcher and founder of Syntience Inc.
Her work has focused on the Epistemology of AI, but all her theory is grounded in her experience designing and implementing (Human Language) Understanding Machines based on Deep Discrete Neuron Networks since Jan 1, 2001.
She can adopt a Holistic or Reductionist stance as needed, and wants to teach others how to switch. Her current projects include creating a social medium where chat messages are routed by an Understanding Machine. She has been awarded a handful of patents in this field.
She is an ex-Googler, has facilitated 100+ Bay Area AI meetup sessions over 5 years, and plays keyboards and Bridge.
How can we understand agency in the context of the cooperation and competition between AI, humans and other organisms?
0:00 Introduction
1:14 Presentation starts
1:48 Spirits & western confusion about consciousness
5:48 Genesis: an updated version of the origin story (6 stages)
13:30 The history of studying agency
15:07 Today's models & AI systems
18:35 Cybernetics: modeling in the service of control
22:09 Computation vs. cybernetics
24:29 How do neurons compute minds?
26:45 Neural circuits in artificial neural networks
28:17 Is the circuit metaphor wrong? Self organization in biological neurons & Neural Darwinism
32:22 Conscious seed theory (technological design vs. organic growth)
38:31 Hierarchy & design constraints of causal systems, groups, state governments & agents
43:55 The society of mind, self regulation & the consciousness prior
48:53 Attention as an agent & role of consciousness
51:19 Society of minds: human intellect & civilization intellect
53:44 Stages of intelligent agency (societal agency, Maslow's hierarchy, "sacredness")
57:57 Principles for emergent higher level agency (7 virtues)
1:02:20 The alignment problem
1:07:26 Q&A
This talk was part of the ‘Stepping Into the Future’ conference.
http://www.scifuture.org/agency-in-age-of-machines-joscha-bach
Bio: Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.
Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin and the Institute for Cognitive Science at Osnabrück.
His book “Principles of Synthetic Intelligence – PSI: An Architecture of Motivated Cognition” (Oxford University Press) is available on Amazon.
amazon.com/Principles-Synthetic-Intelligence-PSI-Architectures/dp/0195370678
One paradigm considers superintelligences as resembling modern deep reinforcement learning systems, obsessively concerned with optimizing particular goal functions. Another considers superintelligences as open-ended, complex evolving systems, ongoingly balancing drives toward individuation and radical self-transcendence in a paraconsistent way. In this talk I will argue that the open-ended conception of superintelligence is both more desirable and more realistic, and will discuss how concrete work being done today on projects like OpenCog Hyperon, SingularityNET and Hypercycle potentially paves the way for a path through beneficial decentralized integrative AGI and on to open-ended superintelligence and ultimately the Singularity.
Bio: In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of “achieving complex goals in complex environments”. A “baby-like” artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as “attention values”, with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.
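The knowledge representation described above — a network whose nodes and links carry probabilistic truth values alongside attention values — can be sketched minimally in Python. The class names, fields and the toy deduction rule below are simplifications for illustration, loosely inspired by OpenCog's Atomspace idea; they are not OpenCog's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """A node carrying a probabilistic truth value and an attention value."""
    name: str
    strength: float = 0.5    # probability-like truth strength in [0, 1]
    confidence: float = 0.0  # confidence in that strength
    attention: float = 0.0   # attention value, analogous to a neural-net weight

@dataclass
class Link(Atom):
    """A link connecting atoms; carries the same truth/attention values."""
    targets: list = field(default_factory=list)

def deduce(implication: Link, premise: Atom) -> Atom:
    """Toy probabilistic inference step: from 'A implies B' and 'A', estimate 'B'."""
    conclusion = implication.targets[1]
    conclusion.strength = implication.strength * premise.strength
    conclusion.confidence = min(implication.confidence, premise.confidence)
    return conclusion

# Example: infer "wet_ground" from "raining" and "raining implies wet_ground".
a = Atom("raining", strength=0.9, confidence=0.8)
b = Atom("wet_ground")
rule = Link("raining_implies_wet", strength=0.95, confidence=0.9, targets=[a, b])
result = deduce(rule, a)
```

A real system would run many such algorithms over the network concurrently, with the attention values steering which atoms the inference and evolutionary-programming processes spend effort on.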
This talk is part of the ‘Stepping Into the Future’ conference. http://www.scifuture.org/open-ended-vs-closed-minded-conceptions-of-superintelligence
Synopsis: Does the History, Dynamics, and Structure of our Universe give any evidence that it is inherently “Good”? Does it appear to be statistically protective of adapted complexity and intelligence? Which aspects of the big history of our universe appear to be random? Which are predictable? What drives universal and societal accelerating change, and why have they both been so stable? What has developed progressively in our universe, as opposed to merely evolving randomly? Will humanity’s future be to venture to the stars (outer space) or will we increasingly escape our physical universe, into physical and virtual inner space (the transcension hypothesis)? In Earth’s big history, what can we say about what has survived and improved? Do we see any progressive improvement in humanity’s thoughts or actions? When is anthropogenic risk existential or developmental (growing pains)? In either case, how can we minimize such risk? What values do well-built networks have? What can we learn about the nature of our most adaptive complex networks, to improve our personal, team, organizational, societal, global, and universal futures? I’ll touch on each of these vital questions, which I’ve been researching and writing about since 1999, and discussing with a community of scholars at Evo-Devo Universe (join us!) since 2008.
For fun background reading, see John’s Goodness of the Universe post on Centauri Dreams, and “Evolutionary Development: A Universal Perspective”, 2019.
John writes about Foresight Development (personal, team, organizational, societal, global, and universal), Accelerating Change, Evolutionary Development (Evo-Devo), Complex Adaptive Systems, Big History, Astrobiology, Outer and Inner Space, Human-Machine Merger, the Future of AI, Neuroscience, Mind Uploading, Cryonics and Brain Preservation, Postbiological Life, and the Values of Well-Built Networks.
He is CEO of Foresight University, founder of the Acceleration Studies Foundation, and co-founder of the Evo-Devo Universe research community, and the Brain Preservation Foundation. He is editor of Evolution, Development, and Complexity (Springer 2019), and Introduction to Foresight: Personal, Team, and Organizational Adaptiveness (Foresight U Press 2022). He is also author of The Transcension Hypothesis (2011), the proposal that universal development guides leading adaptive networks increasingly into physical and virtual inner space.
A talk for the ‘Stepping into the Future’ conference (April 2022).
http://www.scifuture.org/the-goodness-of-the-universe-outer-space-inner-space-and-the-future-of-networks-w-john-smart
What is Alignment?
Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they act in the interests of their designers, their users, and humanity as a whole. Failure to align them could lead to catastrophic results.
Our long experience in the field of AI safety has identified the key bottleneck for solving alignment: concept extrapolation.
What is Concept Extrapolation?
Algorithms typically fail when they are confronted with new situations – they go out of distribution. Their training data will never be enough to deal with all unexpected situations – thus an AI will need to safely extend key concepts and goals, similarly to – or better than – how humans do it.
This is concept extrapolation, explained in more detail in this sequence. Solving the concept extrapolation problem is both necessary and almost sufficient for solving the whole AI alignment problem.
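Going "out of distribution" can be made concrete with a minimal detector: flag inputs that fall far from the training data, the point at which a learned concept can no longer be trusted and would need to be extrapolated. The sketch below (hypothetical function names, a simple per-feature z-score test) illustrates only the detection side, not the concept-extrapolation method itself.

```python
import math

def fit(training_data):
    """Record per-feature mean and standard deviation of the training data."""
    n = len(training_data)
    dims = len(training_data[0])
    means = [sum(x[d] for x in training_data) / n for d in range(dims)]
    stds = [
        math.sqrt(sum((x[d] - means[d]) ** 2 for x in training_data) / n) or 1.0
        for d in range(dims)
    ]
    return means, stds

def is_out_of_distribution(x, means, stds, threshold=3.0):
    """Flag an input if any feature lies more than `threshold` std-devs from the mean."""
    return any(abs(x[d] - means[d]) / stds[d] > threshold for d in range(len(x)))

# Toy 2-feature training set; nearby inputs pass, distant ones are flagged.
train = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.1), (1.1, 2.0)]
means, stds = fit(train)
```

Detection is the easy half; the hard problem the talk addresses is what the system should *do* with flagged inputs — safely extending its concepts and goals rather than failing silently.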
This talk is part of the ‘Stepping Into the Future’ conference.
Bio: Dr Stuart Armstrong, Co-Founder and Chief Research Officer
Previously a Researcher at the University of Oxford’s Future of Humanity Institute, Stuart is a mathematician and philosopher and the originator of the value extrapolation approach to artificial intelligence alignment. He has extensive expertise in AI alignment research, having pioneered such ideas as interruptibility, low-impact AIs, counterfactual Oracle AIs, the difficulty/impossibility of AIs learning human preferences without assumptions, and how to nevertheless learn these preferences. Along with journal and conference publications, he posts his research extensively on the Alignment Forum.
For more detail, see his full talk 'The Goodness of the Universe: Outer Space, Inner Space, and the Future of Networks' here: http://www.scifuture.org/the-goodness-of-the-universe-outer-space-inner-space-and-the-future-of-networks-w-john-smart
For some fun background reading, see ‘What is the upper limit of value?‘ which Anders Sandberg co-authored with David Manheim.
This talk is part of the ‘Stepping Into the Future’ conference.
Anders Sandberg is a senior research fellow at the Future of Humanity Institute at the University of Oxford and research associate at the Institute for Future Studies in Stockholm. Anders' background is in computational neuroscience, but for the past 20 years he has been working on neuroethics, global catastrophic risk, long-range futures and reasoning about uncertainty.
Now imagine if every moment in your life could be as good as this experience, or even better. Other things being equal, wouldn’t it be nice if we had higher quality lives?
Panel discussion at the 'Stepping Into the Future' conference featuring David Pearce and Andrés Gómez Emilsson. http://www.scifuture.org/engineering-paradise-a-panel-w-david-pearce-mike-johnson-andres-gomez-emilsson
For much of history talk of ‘paradise engineering’ would simply be dismissed as utopian dreaming. Throughout the course of civilization humanity has tried to improve its lot by manipulating its environment in innumerable ways – yet, to be honest, on the inside we’re not significantly happier now than our ancestors on the African savanna – certainly not if suicide, depression and marital breakup statistics are taken seriously.
(1) avoid negative extremes,
(2) increase hedonic baseline, and
(3) achieve new heights of experience.
With regards to (1): the future of consciousness is anodyne. It lacks extreme suffering in any of its guises. We will see how, if we aim right, a significant proportion of extreme suffering can be prevented with pragmatic technologies already available. Even just applying what we know today would be as significant for the reduction of suffering as the advent of anesthesia was in the context of surgery.
On (2): the future of consciousness is engaging. From novelty generation to Buddhist annealing, baseline-enhancing interventions will change the way we think of life. It is not only about making every day fun, but also about the economics of it.
And (3): the future of consciousness is ecstatic. A science of ecstasy will allow us to safely and reliably sample from a wide range of time-tested ultra-blissful peak experiences. A common cause with other sentient beings, and indeed with the interests of consciousness at large, can be forged in the knowledge of such deep experiences.
They give you a genuine, non-sentimental, reason to live. Together, action on these three levels can significantly advance the cause of eliminating suffering and engineering paradise. And our assessment is: there is a lot of low-hanging fruit in this space. Let’s pick it up!
This talk is part of the Stepping into the Future conference 2022. http://www.scifuture.org/events/stepping-into-the-future
Bio: Director of Research at Qualia Research Institute
A talk by David Pearce for the Stepping into the Future conference 2022. scifuture.org/events/stepping-into-the-future
A big thanks to Adam James Davies for doing the chapters!
0:00 Introduction/beginning
0:07 The Biohappiness Revolution
1:53 Paradise Engineering: when? How?
3:21 Jo Cameron and anandamide
4:30 World Health Organisation and a hundred-year plan to end suffering
5:34 Intro ends; the live presentation by David Pearce begins…
6:30 Our ancestors on the African Savannah
7:40 The daunting scale of the project ahead
8:07 ‘Flagship’ Chinese CRISPR babies and missed opportunities
9:44 Our ‘volume knobs’ for pain and nonsense mutations
12:20 Physical pain, psychological pain and Jo Cameron
14:10 Questions opening
15:07 Hugo de Garis: what is state-of-the-art within CRISPR and genetic engineering?
18:00 Nick Bostrom’s appearance on Joe Rogan’s show! The need for charismatic leadership within the suffering-abolitionist movement
19:50 Andres Gomez Emilsson: the necessity of enhancing more than one characteristic, for example: intelligence as well as hedonic set-points.
20:28 The pitfalls of enhancing intelligence
23:40 “Life in the Year 3000”, and the likelihood of nuclear war
25:03 Hugo: how ambitious should we be to begin with considering the sheer number of genes in the human genome?
26:05 Cloning super-geniuses like John von Neumann
27:20 The inherent ignorance of Turing Machines and classical digital computers
28:50 Solving the Phenomenal Binding Problem, and obscure disorders of various types rooted in the breakdowns of phenomenal binding.
31:04 Question from ? How do you test if a system has solved the Binding Problem? P-Zombies, et cetera
32:50 Strong emergence is like magic?
33:50 More on the Binding Problem and quantum mind
36:56 An appraisal of Andres and his knowledge about consciousness
37:15 More on genetic engineering, looping back round to Hugo’s last question; gene therapy’s important role to play in ending suffering
38:29 Another question from ? Can superhappiness ‘naturally’ follow from intelligence enhancement, or vice versa?
40:30 The abolitionist project is already technically feasible for both humans and non-humans - it is not sci-fi!
42:30 Successfully engineering super-intelligence might be more of a challenge than even ending suffering!
43:29 Another question from ? Can there be a formal mathematical language for philosophical and metaphysical statements?
45:35 Hugo: did Leibniz think about this problem? (No answer)
46:49 Adam Ford reminds guests and audience of the next presentation set to begin soon
47:33 Adam asks David about the mainstream normalisation of suffering-abolitionism
49:08 Adam asks David about his thoughts on Yuval Noah Harari and his ideas
50:20 Neil asks everyone, “What would it feel like to be Jo Cameron?”
50:50 Anders Sandberg’s ‘ridiculously high’ hedonic set-point, and others with similar ‘conditions’ (see ‘hyperthymia’, for example)
52:55 Andres asks, “is there an ideal state of consciousness?”
56:50 end
- Are we morally equipped to deal with humanity’s grand challenges?
- If the majority population of a democratic state were morally deficient, would it be okay to morally enhance the population, or does this cross the line (i.e. by manipulating the population’s will)?
- Whose morals?
- Who are the ones to be morally enhanced?
- Will it be compulsory?
- Won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?
- Shouldn’t people be concerned that use of enhancements which alter character traits might undermine consumers’ authenticity?
- How can we alleviate aspects of the dark factor of personality (d factor) today, and in the future?
This panel was part of the Stepping Into the Future conference: http://www.scifuture.org/are-we-fit-for-the-future
This talk is part of the ‘Stepping Into the Future’ conference.
http://www.scifuture.org/james-hughes-cyborg-virtues-using-bcis-for-moral-enhancement
Synopsis: Links between brain structures and cognition began with studies of victims of brain injuries, and became more precise with advances in brain imaging. In the last two decades research has demonstrated that moral emotions and cognition can be modulated with internal and external stimulation focused on particular brain structures. While non-invasive methods of neuromodulation, like transcranial direct current stimulation, are widely available for the healthy, their effects are more diffuse and uncertain. Deep brain stimulation electrodes or implanted computer chips allow more precise sensing and stimulation, but are only applicable for severe conditions such as intractable epilepsy and treatment-resistant depression. As BCIs are miniaturized and given more capacities they will be more feasible for use by those without severe disabilities. Soon hundreds or thousands of microscopic computer chips, sensors and electrodes implanted in the brain will allow real-time sensing, inhibition and boosting of thoughts and emotions, opening up morally enhancing applications. Individuals with brain disorders that lead to violence and criminality, for instance, could be offered BCI therapy as an alternative to psychiatric treatment or incarceration. This essay proposes a model of six virtues that could be targets of neuromodulation: self-control, caring, intelligence, positivity, fairness and transcendence. Key parts of the brain implicated in the functioning of each virtue are reviewed as possible targets for morally enhancing neuromodulation.
Pramod K. Nayar
This talk is part of the ‘Stepping Into the Future’ conference.
http://www.scifuture.org/posthumanism-and-its-moral-imperatives-pramod-nayar
Bio: Pramod Nayar teaches M.A. courses in Literary Theory, the English Romantics and Postcolonial Literatures. His interests lie in English colonial writings on India, travel writing, Human Rights and narratives, posthumanism, postcolonial literature, Cultural Studies (celebrity studies, digital cultures), literary & cultural theory and graphic novels, with significant and regular publications in these areas.
John Smart is a futurist and scholar of accelerating change. He is CEO of Foresight University, founder of the Acceleration Studies Foundation, and co-founder of the Evo-Devo Universe research community, and the Brain Preservation Foundation. He is editor of Evolution, Development, and Complexity (Springer 2019), and Introduction to Foresight: Personal, Team, and Organizational Adaptiveness (Foresight U Press 2022). He is also author of The Transcension Hypothesis (2011), the proposal that universal development guides leading adaptive networks increasingly into physical and virtual inner space.
#FutureOfWork #JobSecurity #Automation
Are there facts about whether something is beautiful, or good art, or are such things purely a matter of opinion?
Post describing talk here: http://www.scifuture.org/the-aesthetic-of-the-meta-aesthetic-the-meaning-nexus-between-memeplexes-andres-gomez-emilsson
Synopsis: In the spirit of fostering a collaborative relationship between the memeplexes that currently occupy the minds of the post-political intelligentsia, Andrés shares a conceptual framework he believes is useful for sense-making independently of one’s subcultural affiliation. Namely, he will share a theory of aesthetics.
Aesthetics go much deeper than merely the preference one may have for clusters and correlations of sensorial patterns. Aesthetics, in fact, cut to the very root of our concept of identity.
Inspired by Rob Burbea’s Soulmaking, Andrés will discuss how aesthetics can be broken down into:
(1) Eros – the set of images that energize one’s thirst for life,
(2) Psyche – the network of relationships between Eros imagery, and
(3) Logos – the overarching ontology upon which Psyche and Eros are based.
Andrés discusses how these components emerge from specific philosophical background assumptions, are then adopted as social aesthetics, and ultimately risk becoming merely tribal markers. Insofar as people are caught up in the dissonance between aesthetics without understanding the Logos that breathes life into them, they will continue to fight in unproductive ways. Ultimately, a careful map of the valence that an aesthetic associates with each symbol will allow us to create a music theory of aesthetics and liberate people from the burden of pointless memetic wars. That is, we can predict in advance what kinds of discussions are likely to break down due to different valences on key load-bearing symbols, and re-route them through a different path that nonetheless achieves the desired information processing. An understanding of how aesthetics bias our valuations would itself be an aesthetic, of course: the aesthetic of the meta-aesthetic.
This talk argues that such a meta-aesthetic could become the nexus that allows us to “get the best of each world”. The end-goal: to make aesthetic pluralism game-theoretically stable.
The Institute for Ethics and Emerging Technology - ieet.org
Further on the rights to health: "The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition." Read more of the Constitution of the World Health Organization (1948) ref: https://www.who.int/governance/eb/who_constitution_en.pdf
Background reading: en.wikipedia.org/wiki/Biohappiness
http://www.scifuture.org/the-biohappiness-revolution-david-pearce
#Biohappiness : en.wikipedia.org/wiki/Biohappiness
#HedonisticImperative : hedweb.com
#Happiness #ReducingSuffering #BiohappinessRevolution
overcomingbias.com/2021/06/ufos-what-the-hell.html
"Yes, the universe looks completely dead; we see no signs of life outside Earth, even though over millions of years advanced aliens could have made some big visible changes. Some possible explanations:
1. Aliens arise so rarely that the nearest ones are too far to see, or to have travelled to here,
2. Aliens are common but simply can’t travel between stars or make big visible changes,
3. Aliens are common and travel everywhere, but enforce rules against visible changes, or
4. Aliens arise rarely, but in small clumps; the first in clump to appear can control the others.
Of these, only the last two can put aliens here now, and #3 seems too much a conspiracy (i.e., coordinate to hide) theory for my tastes. But scenario #4 works, and could plausibly result from “panspermia.”
That is, simple life might have arisen on a planet Eden long ago, via a very rare event. (My research suggests this happens only once per million galaxies.) After life evolved at Eden for billions of years, a rock hit Eden, kicking up another rock that drifted for millions of years carrying life to seed our Sun’s stellar nursery. A nursery that held thousands of new stars packed close with many rocks flying around, allowing life to spread quickly to them all."
Many thanks to those who participated in the Q&A session.
The declassification of US military footage has rekindled a fiery feud - but before committing to a position in this debate, how can we assess the likelihood, given the existing evidence, that we are being visited by ETs with technology far superior to our own?
Anders Sandberg beamed in to make certain disclosures about Bayesian statistics applied to recent UAP/UFO 'sightings' - so let's all put on our thinking caps - and if you like yours silver, shiny and foiled, that's fine too.
We also spoke about convergences in cognition and ethics which apply not only to aliens, but also to AI.
Background reading - UFOs: how to calculate the odds that an alien spaceship has been spotted - theconversation.com/ufos-how-to-calculate-the-odds-that-an-alien-spaceship-has-been-spotted-162269
http://www.scifuture.org/anders-sandberg-aliens-bayesians-and-blurry-footage-of-ufos
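As a toy illustration of the Bayesian reasoning applied to sightings, here is a minimal sketch of the update. All numbers are hypothetical placeholders chosen for illustration, not figures from the talk or the linked article:

```python
# Toy Bayesian update for a UFO sighting.
# All probabilities below are hypothetical placeholders.

prior_alien = 1e-9               # prior: a given sighting is an ET craft
p_footage_given_alien = 0.5      # chance an ET craft would leave blurry footage
p_footage_given_mundane = 0.01   # chance a mundane cause (balloon, glitch) would

# Law of total probability for the evidence:
p_footage = (p_footage_given_alien * prior_alien
             + p_footage_given_mundane * (1 - prior_alien))

# Bayes' rule:
posterior = p_footage_given_alien * prior_alien / p_footage
print(f"P(alien | footage) = {posterior:.2e}")
```

Because mundane causes produce blurry footage fairly often, even a generous likelihood for the alien hypothesis leaves the posterior tiny - which is why low-quality evidence moves the odds so little.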
Talk held at AGI17 - http://agi-conference.org/2017/
#AGI17 #AGI #ArtificialIntelligence #Understanding #MachineUnderstanding #CommonSense #ArtificialGeneralIntelligence #PhilMind
en.wikipedia.org/wiki/Artificial_general_intelligence
As AI encroaches further into areas of economic usefulness where humans traditionally dominated, how might we avoid uselessness and stay relevant? Merge with the machines, says Hugo.
Many thanks to Forms for the use of the track "Close" - check it out: youtube.com/watch?v=nFY0JbwrPlE | SoundCloud: soundcloud.com/forms308743226
Why discuss this issue? Why is AI important?
Intelligence is powerful; it's a force multiplier.
Interviewees include: James Fodor, Cameron Ashendale, Alice Knight, Rick Barker, Chris Watkins, Francesco Orsenigo, Chris Guest, Elida Radig & Sirius
Filmed at a Darwin Day picnic in Melbourne Australia.
The picnic was put on by these groups:
- Rationalist Society of Australia
- Australian Skeptics Victorian Branch
- Humanist Society of Victoria
- Progressive Atheists
darwinday.org
en.wikipedia.org/wiki/Darwin_Day
- It's difficult to believe civilisations are very rare, since our abiogenesis and rise to civilisation appeared somewhat early in the history of the universe.
In sum, it is possible to estimate how far away in space and time the nearest aliens are, if one is willing to make these assumptions:
- It is worth knowing how far away grabby alien civs (GCs) are, even if that doesn't tell us about other alien types.
- Try-try parts of the great filter alone make it hard for any one oasis to birth a GC in 14 billion years.
- We can roughly estimate the speed at which GCs expand, and the number of hard try-try steps.
- Earth is not now within the sphere of control of a GC.
- Earth is at risk of birthing a GC soon, making today's date a sample from the GC origin-time distribution.
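The "today's date as a sample" assumption can be sketched numerically. Under a hard-steps model, the chance an oasis births a GC by time t scales like (t/deadline)^n, so origin dates cluster near the deadline. This is only an illustrative sketch, not Hanson's actual model code; the six-step count and 14-Gyr window are assumed values in the spirit of the Carter-style hard-steps estimates:

```python
import random

def sample_gc_origin_times(n_hard_steps, deadline_gyr, k, seed=0):
    """Draw k GC origin dates under a power-law hard-steps model:
    P(a GC has arisen by time t) ~ (t / deadline_gyr) ** n_hard_steps,
    so an origin time can be sampled as deadline_gyr * U ** (1/n)."""
    rng = random.Random(seed)
    return [deadline_gyr * rng.random() ** (1.0 / n_hard_steps)
            for _ in range(k)]

# Assumed parameters for illustration: 6 hard steps, ~14-Gyr window to date.
times = sample_gc_origin_times(n_hard_steps=6, deadline_gyr=14.0, k=100_000)
mean_gyr = sum(times) / len(times)
print(f"mean GC origin time = {mean_gyr:.1f} Gyr")
# With n hard steps the distribution's mean is deadline * n/(n+1), so most
# origin dates land late - which is why a civilisation appearing "now" is
# an informative sample from the distribution.
```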
Please also check out the other interviews with Robin Hanson on the Great Filter and Burning the Cosmic Commons.
Refs:
Great Filter: http://mason.gmu.edu/~rhanson/greatfilter.html
Try-Try or Try-Once Great Filter? overcomingbias.com/2020/12/try-try-or-try-once-great-filter.html
How Far Aggressive Aliens? Part 1: overcomingbias.com/2020/12/how-far-aggressive-aliens.html
How Far Aggressive Aliens? Part 2: overcomingbias.com/2020/12/how-far-aggressive-aliens-part-2.html
Searching for Eden: overcomingbias.com/2020/12/searching-for-eden.html
Five or six step scenario for evolution? (Brandon Carter): arxiv.org/abs/0711.1985
#GrabbyAliens #ExpansionistAliens
Burning the cosmic commons:
- Video interview: youtube.com/watch?v=wLmhXE2e1bY
- Paper: http://mason.gmu.edu/~rhanson/filluniv.pdf
#BurningCosmicCommons
The Great Filter:
- Video interview: youtube.com/watch?v=zGXpsJYNILg
- Wikipedia: en.wikipedia.org/wiki/Great_Filter
#GreatFilter
01:08 Science Communication & the information deficit explanation of public hostility / skepticism towards science
05:11 The tension between sober-mindedness in science and the need to translate science to the public in an engaging way. Hype and exaggeration in science.
08:40 The End of Science - has science reached an era of diminishing returns?
11:13 Is it possible for science to describe things like the abiogenesis of life on earth or consciousness?
15:34 Evidence of fossilized microbes from mars?
17:44 The Great Filter? Why can’t we see any evidence of other galactic expansionist alien civs?
20:50 Humans' seeming need to fill in the gaps with mystical or highly speculative explanations. Epistemic humility.
25:17 Covering an AI Ethics conference
30:22 Can we automate ethics?
32:05 The term singularity
35:15 Basic AI Drives
36:33 What would SI do once it had enough resources?
40:20 Progress in AI. Was part of the reason for an AI winter the focus on symbolic AI? AI as a project to understand ourselves.
47:09 Do we need sentient machines to achieve highly capable machines?
49:08 Can consciousness be measured? Integrated Information Theory (IIT) and panpsychism.
53:20 Solipsism and epistemic humility
57:24 The Hogan sisters and split brain experiments
01:03:01 John Horgan questions his own ‘End of Science’ thesis
01:01:12 Perhaps we need some kind of machine cognition to make certain kinds of progress in science. Dennis Overbye thinks AI will make progress with a grand unified theory.
01:01:53 AlphaFold - protein folding prediction
01:03:28 AI as a black box - extraordinarily powerful opaque AIs & the replication crisis in AI
01:07:44 Pay Attention: Sex, Death, and Science
01:09:43 A book on the horizon on quantum mechanics
About John Horgan:
- http://www.johnhorgan.org
- https://twitter.com/Horganism
- https://en.wikipedia.org/wiki/John_Horgan_(journalist)
Articles by John Horgan:
- scientificamerican.com/author/john-horgan7/
- blogs.scientificamerican.com/cross-check/seeing-the-miracle-of-existence-in-the-darkest-of-times/
- blogs.scientificamerican.com/cross-check/donald-trump-and-the-problem-of-evil/
- blogs.scientificamerican.com/cross-check/what-would-a-machine-as-smart-as-god-want/
- blogs.scientificamerican.com/cross-check/dear-anti-trump-protestors-please-renounce-violence
#TheEndOfScience #PayAttention #AI
We also discuss if AI can help with the AI safety problem itself - in relation to whether AI can understand.
Documentary - Spillover — Zika, Ebola & Beyond: pbs.org/spillover-zika-ebola-beyond/home
Book 'Our Final Invention': https://www.amazon.com.au/Our-Final-Invention-James-Barrat/dp/0312622376
https://en.wikipedia.org/wiki/James_Barrat
#AISafety #OurFinalInvention #FriendlyAI
The Hedonistic Imperative - hedweb.com
http://youthereum.ca
Yuri has a track record of not only raising over $20 million for his previous ventures but also initiating and overseeing 4 clinical trials and several preclinical studies, including studies in transgenic mice.
At Youthereum Genetics, Yuri is currently leading a project dedicated to developing an epigenetic rejuvenation gene therapy, as intermittent epigenetic partial reprogramming demonstrated great experimental results in mice: it extended their lifespan by up to 50%.
His life goal is to do everything possible to minimize human suffering from various diseases, especially terminal age-related diseases such as cancer, Alzheimer’s, and cardiovascular disease and to help humanity eradicate them. As an activist, blogger, and speaker, he is conveying the magnitude of human suffering these diseases cause, as they take over 100,000 lives each day. As a biotech entrepreneur, Yuri is doing his modest part by putting together projects that could yield such therapies, splitting his time between Toronto and Moscow.
He believes that one day humanity will cure all such diseases, and he wants to do whatever he can to hasten that day.
Since 2013, Yuri has also served as the Vice President of the nonprofit Science for Life Extension Foundation, whose goal is the popularization of the fight against age-related diseases. To further this cause, Yuri frequently blogs, speaks, writes op-ed pieces, and participates in various TV and radio shows. At the Science for Life Extension Foundation, Yuri is helping create and implement social change strategies to raise public awareness that aging is a curable disease. He is also working on initiating intergovernmental dialogue and public hearings about including aging in the WHO's ICD-11.
Previously, Yuri was the COO and Managing Director at Pharma Bio in Moscow for almost 7 years. From 2015 to 2017, Yuri was the Vice President of Business Development at Manus Pharmaceuticals in Toronto, Canada where he worked on raising funding and forming strategic partnerships to develop breakthrough peptide compounds aimed at preventing Alzheimer’s disease. Before that, he was the VP of Business Development at Peptos Pharma in Moscow.
twitter.com/ydeigin
#Rejuvenation #AntiAging #UndoingAging
Alexander Fedintsev is a scientist and machine learning engineer. His scientific background lies in the field of bioinformatics, statistics, and machine learning. Alexander earned his M.S. in computer science from the National Research University "Moscow Power Engineering Institute".
Alexander worked at the Institute of Antimicrobial Chemotherapy as a bioinformatician. He also collaborated with Professor Alexey Moskalev's lab on aging research. After leaving academia, Alexander switched to machine learning engineering; however, he continued collaborating on aging research with Professor Moskalev.
He developed a highly accurate non-invasive biomarker of aging based on markers of the cardiovascular system. Now his research interest is mainly focused on the role of extracellular matrix (ECM) in the aging process. He and professor Moskalev recently suggested treating non-enzymatic modifications of long-living proteins (mostly, in the ECM) as a 10th hallmark of aging.
02:33 extended cognition vs extended consciousness
03:57 Does every part of our modular brain contribute to consciousness?
05:04 Disagreement on the meaning of the word 'cognition'
07:25 Wittgenstein - the meaning of a word is its use
08:51 Shared grounding in a hive?
10:40 Extending reach vs extending hands or arms
12:34 GPT3 as a writing aid. Chatting with dead philosophers
16:30 Lost in the hermeneutic hall of mirrors
19:59 A metaphor for symbol grounding?
21:54 Further on what symbol grounding is.. categories from propositions - affording the capacity to acquire and transmit grounded categories through language (instead of by trial and error)
25:34 Artificial life - mushroom (toy) example of symbol grounding
32:30 Grounding categories are not just concrete objects, but can be abstract concepts (e.g. sharpness)
33:42 GPT3 Goethe revisited.
37:22 Vanishing intersections. Chomsky's work on universal grammar (syntax) & the poverty of stimulus.
48:34 How do humans learn categories? 3 ways: supervised, unsupervised and through language
52:28 What happens when we find evidence to dis-confirm something we have already learned? (e.g. a black swan)
55:46 Natural kinds
59:05 Natural kinds by transition?
02:15 Essences and transitions
05:48 How much of our current behavior has homologues in the ancestral environment?
10 Selective behavior - example of the peacock's tail that controls for cheating
12:15 Parasitism
14:13 To what degree did our capacity for generalization come before grammar?
14:48 Learning, Skinner boxes, abstraction and categorization. Is (category) learning itself the mother of all generalizations?
18:58 Dictionaries & Grounding sets
19:53 What is meant by a dictionary? The nucleus (can define all words inside and outside itself), the core (can define all words inside itself), and satellites (tiny clusters of words around the core). Minimal grounding sets are not dictionaries: a grounding set can only define words outside itself, and a minimal one is the smallest set of words that can define all the rest through combinations. It turns out there are many of them, so finding the ultimate grounding set is an NP-complete problem (using a directed graph). All grounding sets are part core and part satellite. The size of the minimum grounding set is between 750 and 1500 words.
30:48 Grounding sets, and the dictionary game.
35:03 The easy and the hard problems of consciousness. Why is the hard problem of consciousness hard?
39:56 Shared cognition? Shared experience? The case of the Hogan sisters, Siamese twins conjoined at the head - sharing parts of their brains.
44:21 The identity of indiscernibles
48:09 Renounce ontology! Psychologist Stevan Harnad is a naive realist; he is interested in what organisms can do, and how they feel.
49:28 Feeling primitives - can feels be reduced to monads or are they complex?
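The minimal grounding set discussion (19:53) can be made concrete with a toy dictionary. The word list and definitions below are invented for illustration; a brute-force search like this only works at toy scale, since (as noted in the interview) the general problem is NP-complete:

```python
from itertools import combinations

# Toy dictionary: each word defined in terms of other words (hypothetical).
definitions = {
    "animal": ["alive", "thing"],
    "dog":    ["animal"],
    "cat":    ["animal"],
    "alive":  ["thing"],   # "alive" and "thing" define each other circularly
    "thing":  ["alive"],
}

def grounds_all(grounded, defs):
    """True if, starting from `grounded`, every word becomes definable:
    a word is learnable once all words in its definition are known."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in defs.items():
            if word not in known and all(w in known for w in defn):
                known.add(word)
                changed = True
    return known == set(defs)

def minimal_grounding_sets(defs):
    """Brute-force the smallest grounding sets (exponential in dictionary
    size -- feasible only for toy examples)."""
    words = sorted(defs)
    for size in range(len(words) + 1):
        hits = [set(c) for c in combinations(words, size)
                if grounds_all(c, defs)]
        if hits:
            return hits

print(minimal_grounding_sets(definitions))
```

Here grounding either member of the circular pair lets everything else be learned from definitions alone, mirroring the idea that a small grounded kernel can bootstrap the rest of a vocabulary through language.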
Pew Research 'The Future of Truth & Misinformation Online' : pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online
“The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the wrong thing as it is on the ability to get people to doubt the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”
We speak a bit about AI language modelling, the fact that it doesn't understand stuff, and the possibility of creating AI that actually understands. Recently in the news, language modelling (i.e. GPT-3) is being used to help generate fake news. Interestingly, 'a college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News': theverge.com/2020/8/16/21371049/gpt3-hacker-news-ai-blog
Frameworks for helping understand wickedly complex information - VUCA (Volatile, Uncertain, Complex, and Ambiguous) vs BANI (Brittle, Anxious, Nonlinear, and Incomprehensible):
Jamais’ BANI piece on Medium “Facing the Age of Chaos":
medium.com/@cascio/facing-the-age-of-chaos-b00687b1f51d
Quote: “There has always been uncertainty and complexity in the world, and we have devised reasonably effective systems to figure out and adapt to this everyday disorder. From weighty institutions like “law” and “religion” to habituated norms and values, even to ephemeral business models and political strategies, much of what we think of as composing “civilization” is ultimately a set of cultural implements that allow us to domesticate change. If we can make disruptive processes understandable, we hope, maybe we can keep their worst implications in check.”
And on BANI “It doesn’t have to be that way. The BANI framework offers a lens through which to see and structure what’s happening in the world. At least at a surface level, the components of the acronym might even hint at opportunities for response: brittleness could be met by resilience and slack; anxiety can be eased by empathy and mindfulness; nonlinearity would need context and flexibility; incomprehensibility asks for transparency and intuition. These may well be more reactions than solutions, but they suggest the possibility that responses can be found.”
The Institute for the Future:
- Digital Intelligence Lab: iftf.org/partner-with-iftf/research-labs/digital-intelligence-lab
- The Human Consequences of Computational Propaganda: iftf.org/disinfoeffects
#FakeNews #AlternativeFacts #Misinformation
Jamais Cascio is a distinguished fellow at the Institute for the Future: iftf.org
Bio: en.wikipedia.org/wiki/James_Hughes_(sociologist)
Discussion points:
00:28 Demonstrations on counting every vote #everyvotecounts
01:25 Contentious issues in #politics - de-fund the police & socialism
12:43 Strong man politics and the attraction to narcissism #toxicmasculinity
15:20 The appointment of Amy Coney Barrett
23:18 Appealing to transhumanist or more sophisticated Trump supporters - how?
29:16 Ivanka vs Donald as the next Republican candidate? Dog whistling vs bull horning
31:08 Will Trump try for 2024?
33:30 A progressive agenda and a Republican Senate
39:39 Politics & transhumanist goals
46:04 Distrust in the scientific enterprise & science in general. Conspiracy theories
48:06 An update on what IEET are doing
53:09 Contrasting positions of Biden and Trump - Unity vs Division
59:41 Kamala Harris may take over from Biden as president by 2024
03:09 Fake news and emerging technology like #GPT3 & #deepfakes
10:02 Things to do over the next 4 years and beyond to achieve technoprogressive / transhumanist goals
Part 2 is here: youtube.com/watch?v=BFLqy1PvfLo
00:06 On the urgency of reducing suffering. Is ethics merely an aesthetic?
01:29 Classical utilitarianism. Are happiness and suffering morally symmetrical?
04:48 Negative utilitarianism
05:32 Pinprick arguments
11:37 The problem of other minds, skepticism and the Turing test (chat-bot vs robotic Turing tests)
19:18 GPT3 & Symbol Grounding - while a great text generator, it doesn't understand what it generates.
27:17 Stevan Harnad's personal reasons for first becoming a vegetarian and later a vegan
31:48 Are there health issues with being a vegan?
32:21 Informing and sensitizing people about ethical food consumption
33:17 Are there nutritional benefits only found in animal products?
37:22 The ethics of animal experimentation. Conflicts of moral vital interests.
39:17 Covid19 - the cause of almost all pandemics is zoonotic: humans forcing animals into each other's habitats makes it easier for pathogens to jump between species (including humans)
44:48 Pets
49:45 Clean meat - duplicate the taste without the suffering.
51:36 Strategies for reducing animal suffering in industry: 1) Sensitize to horrors and non-necessity, 2) develop clean meat or alternatives to meat 3) scare tactics (i.e. regarding pandemics and environmental issues)
54:32 Disinformation campaigns in the meat industry - Ag-gag en.wikipedia.org/wiki/Ag-gag
58:01 GPT3 as a tool for generating fake news? #GPT3 can't understand what it generates without symbol grounding.
#AnimalWelfare #Ethics #SymbolGrounding #FakeNews #aggag
References
"Other bodies, other minds: A machine incarnation of an old philosophical problem" Dr Stevan Harnad - philpapers.org/rec/HAROBO-2
'Other Minds' - Plato Stanford: https://plato.stanford.edu/entries/other-minds/
'The other-minds problem in other species' - http://generic.wordpress.soton.ac.uk/skywritings/2018/08/09/otherminds
'Taking Animal Sentience Seriously' (Interview with Stevan Harnad) - psychologytoday.com/au/blog/science-and-philosophy/202003/taking-animal-sentience-seriously
01:14 Is #GPT3 on the direct path to #AGI?
04:37 Interesting and crazy output of GPT3 - Conjuring Philip K Dick through transformer neural net experimentation
09:26 Faking understanding - the propensity of GPT3 and other transformer ANNs to produce gibberish some of the time reduces their practical real-world use.
13:16 GPT3 training data contains distillations of human understanding. Difficulties in developing generative document summarizers.
15:33 Occam's Razor & whether adding vastly more parameters makes a remarkable difference in transformer network capability
23:46 Transformer models in music
27:13 What's missing in AI? Symbol grounding and abstract representation
30:34 Minimum requirements for symbol grounding in AGI - need for systems that can generate compact abstract representations
34:57 Paper: Symbol Grounding via Chaining of Morphisms arxiv.org/abs/1703.04368
39:52 Paper: Grounding Occam's Razor in a Formal Theory of Simplicity arxiv.org/abs/2004.05269
46:12 OpenCog Hyperon wiki.opencog.org/w/Hyperon
50:44 What is meaning? Are compact abstract representations required for meaning generation?
54:51 What are symbols? How are they represented in transformer networks? How would they ideally be represented in an AGI system?
59:08 Understanding, compression and Occam's Razor - and the need for compact abstract representations in order to achieve generalization
1:03:08 Integrating large transformer ANNs - a modular approach
1:08:43 Proto transfer learning using concise abstract representations
1:12:15 What's missing in AI atm? What's on the horizon?
1:14:43 Other AGI projects - "Replicode: A Constructivist Programming Paradigm and Language" - Kristinn R. Thórisson: zenodo.org/record/7009
1:14:43 Graph processing units are here (the singularity must be near!)
1:20:28 Why people think it's impossible to achieve AGI this century
1:24:46 The prospect of living to see AGI occur
1:26:04 Superintelligent singleton hard takeoffs and race conditions between competing AGI projects
1:28:49 Centralized AGI development vs it being in the hands of a teeming mass of unorganized humans
1:30:14 The Trump/Biden presidential elections
1:31:28 Looking forward to an AGI 'RObama'-run government
#OccamsRazor #AI #Superintelligence
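The recurring link drawn in this episode between understanding, compression and Occam's Razor (59:08) can be illustrated with a toy experiment: a general-purpose compressor shrinks regular data far more than random data, so compressed size acts as a crude proxy for how much structure has been found. A minimal sketch (illustrative only, not any system discussed in the episode):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size; lower means more regularity was found."""
    return len(zlib.compress(data)) / len(data)

structured = b"abab" * 256      # a highly regular sequence
random_ish = os.urandom(1024)   # bytes with no exploitable structure

# The regular sequence compresses far better than the random bytes,
# mirroring the idea that simpler (more compressible) descriptions
# capture more of the data's structure.
print(compression_ratio(structured) < compression_ratio(random_ish))
```

This is the intuition behind formal treatments like the Simplicity paper linked at 39:52, though those go well beyond off-the-shelf compressors.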
02:40 What's missing in AI atm? Unified coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand - what's missing?
08:35 Symbol grounding - does GPT-3 have it?
09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
11:13 GPT-3 temperature parameter. Strange output?
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning; humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can't write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data - video, audio, text etc
26:00 GPT-3 a universal chat-bot - conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civilization
42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience - it can't plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters?
47:24 GPT3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google GShard with 600 billion parameters - Amazon may be doing something similar - future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world - no reason why GPT-3 can't be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation - Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine it. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can't describe a consistent reality without contradictions
1:06:04 Stevan Harnad's understanding of computation
1:08:32 Causation / answering 'why' questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain - would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
1:19:56 Software/OS as spirit - spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models - parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features - predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 'Category' is a useful concept - gradients are often hard to compute - so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self preservation, dunbar numbers
1:44:10 Are g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If the response is an intelligible & testable implementation plan, does it pass?
1:49:18 The term 'general intelligence' inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
1:52:15 How we perceive color - natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting currently untestable theories/ideas (that may become testable once science develops precise enough instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction
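The temperature parameter discussed at 11:13 controls how sharply a language model's next-token distribution is peaked before sampling: low temperature makes output more deterministic, high temperature makes it stranger. A minimal sketch of temperature sampling in general (not GPT-3's actual implementation; the logits here are made up for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Softmax over logits scaled by 1/temperature, then sample an index.

    Low temperature sharpens the distribution (near-greedy output);
    high temperature flattens it (more surprising output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# At a very low temperature the highest-logit token is chosen
# essentially every time; at a high temperature all tokens become
# nearly equally likely.
print(sample_with_temperature([1.0, 5.0, 2.0], temperature=0.01))
```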
- Josh's Position: Josh takes the position in this debate that the rise of artificial intelligence will create a utopia for humanity.
- Hugo's Position: Hugo takes the opposite position, namely that the rise of godlike massively intelligent machines will be catastrophic for humanity, leading to the worst, most passionate war humanity has ever known, using late 21st century weapons, killing billions of people.
Recorded at AGI-09 (the 2nd Conference on AGI), held March 6-9, 2009: http://agi-conference.org/2009 - filmed by Jeriaska: vimeo.com/jeriaska
Many thanks for watching!
Josh Hall Bio: en.wikipedia.org/wiki/J._Storrs_Hall
Hugo de Garis Bio: en.wikipedia.org/wiki/Hugo_de_Garis
00:11 The concept of understanding is under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI - the Chinese Room argument - and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments - where the penny drops - what's going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging - ah ha moments
10:18 Webs of knowledge - contextual understanding
12:16 Early childhood development - concept formation and navigation
13:11 The intuitive ability for concept navigation isn't complete
Is the concept of understanding a catch all?
14:29 Is it possible to develop AGI that doesn't understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches - engineering, and copying the brain
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world - which, when strong AI comes about, will dissolve into illusions, revealing how they actually work under the hood?
27:40 Compression and understanding
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intellect - data, information, knowledge, understanding, wisdom
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding - is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp rehashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like a high-resolution carbon-copy model that accurately reflects true nature, or a mechanical process?
37:04 Does understanding come in gradients or topologies? Are there degrees, or is it just on or off?
38:37 What comes first - understanding or generality?
40:47 Minsky's 'Society of Mind'
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature
50:48 Deism - James Gates and error correction in super-symmetry
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect - does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny - the transcension hypothesis - therefore civs go tiny - an explanation for the Fermi paradox
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course without the luxury of peering under the hood)
01:03:09 Does AlphaGo understand Go, or Deep Blue chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines - David Chalmers Zombie argument
01:07:26 Complex enough algorithms - is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral 'turing' tests for understanding
01:13:05 Shape sorters and reverse shape sorters
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity - understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries...
01:15:11 Neural nets and adaptivity
01:16:41 The AlphaGo documentary - worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full automation while preserving human dignity?
Filmed in the Dandenong Ranges in Victoria, Australia.
0:00 Intro / What got Pramod interested in posthuman studies?
04:16 Defining the terms - what is posthumanism? Cultural framing of natural vs unnatural. Posthumanism is not just bodily or mental enhancement, but involves changing the relationship between humans, non-human lifeforms, technology and non-living matter. Displacement of anthropocentrism.
08:01 Anthropocentric biases inherited from enlightenment humanist thinking and human exceptionalism. The formation of the transhumanist declaration, with point 7 of the declaration focusing on the well-being of all sentience. The important question of empathy - not limiting it to the human species. The issue of empathy as a good launching pad for further conversations between the transhumanists and the posthumanists.
humanityplus.org/philosophy/transhumanist-declaration
11:10 Difficulties in getting everyone to agree on cultural values. Is a utopian ideal posthumanist/transhumanist society possible?
13:25 Collective societies, hive minds, borganisms. Distributed cognition, the extended mind hypothesis, cognitive assemblages, traditions of knowledge sharing.
16:58 Do the humanities need some form of reconfiguration to shift them towards something beyond the human? Rejecting some of the value systems that enlightenment humanism claimed to be universal. Julian Savulescu's work on moral enhancement.
20:58 Colonialism - what is it?
21:57 Aspects of enlightenment humanism that the critical posthumanists don't agree with. But some believe the posthumanists to be enlightenment-haters who reject rationality - is this accurate?
24:33 Trying to achieve agreement on shared human values - is vulnerability rather than dignity a usable concept that different groups can agree with?
26:37 The idea of the monster - people's fear of what they don't understand. Thinking past disgust responses to new wearable technologies and more radical bodily enhancements.
29:45 The future of posthuman morphology and posthuman rights - how might emerging means of upgrading our bodies / minds interfere with rights or help us re-evaluate rights?
33:42 Personhood beyond the human.
35:11 Should we uplift non-human animals? Animals as moral patients becoming moral actors through uplifting? Also once Superintelligent AI is developed, should it uplift us? The question of agency and aspiration - what are appropriate aspirations for different life forms? Species enhancement and Ian Hacking's idea of 'Making up people' - classification and how people come to inhabit the identities that exist at various points in history, or in different environments.
lrb.co.uk/the-paper/v28/n16/ian-hacking/making-up-people
38:10 Measuring happiness - David Pearce's idea of eliminating suffering and increasing happiness through advanced technology. What does it mean to have welfare or to flourish? Should we institutionalise wellbeing, a gross domestic happiness, world happiness index?
40:27 Anders Sandberg asks: Transhumanism and posthumanism often do not get along - transhumanism commonly wears its enlightenment roots on its sleeve, and posthumanism often spends more time criticising the current situation than suggesting a way out of it. Yet there is no fundamental reason both perspectives could not simultaneously get what they want: a post-human posthumanist concept of humanity and its post-natural environment seems entirely possible. What is Nayar's perspective on this win-win vision?
44:14 The postmodern play of endless difference and relativism - what is the good and bad of postmodernism on posthumanist thinking?
47:16 What does postmodernism have to offer both posthumanism and transhumanism?
49:17 Thomas Kuhn's idea of paradigm changes in science happening funeral by funeral.
58:58 - How has the idea of the singularity influenced transhumanist and posthumanist thinking? Shifts in perspective to help us ask the right questions in science, engineering and ethics in order to achieve a better future society.
1:01:55 - What AI is good and bad at today. Correlational thinking vs causative thinking. Filling the gaps as to what's required to achieve 'machine understanding'.
1:03:26 - Influential literature on the idea of the posthuman - especially that which can help us think about difference and 'the other' (or the non-human) (Octavia Butler, James Hughes, Anders Sandberg, Gary Harper, Julian Savulescu, Mark Tenanbaum)
STELARC – CONTINGENT AND CONTESTABLE FUTURES: DIGITAL NOISE, GLITCHES & CONTAMINATIONS
Event put on in Melbourne late 2019: http://www.scifuture.org/event-stelarc-contingent-contestable-futures
BRIEF BIOGRAPHICAL NOTES
Stelarc experiments with alternative anatomical architectures. His performances incorporate Prosthetics, Robotics, VR and Biotechnology. He is presently surgically constructing and augmenting an ear on his arm. In 1996 he was made an Honorary Professor of Art and Robotics at Carnegie Mellon University, and in 2002 was awarded an Honorary Doctorate of Laws by Monash University. In 2010 he was awarded the Ars Electronica Hybrid Arts Prize. In 2015 he received the Australia Council's Emerging and Experimental Arts Award. In 2016 he was awarded an Honorary Doctorate from the Ionian University, Corfu. His artwork is represented by Scott Livesey Galleries, Melbourne. www.stelarc.org
Speakers: Michelle Chayeb and Hiwot Kelemwok
This presentation was held at H+ @Melbourne 2011 hosted by #SciFuture.
Humanity+ @Melbourne 2011 Conference Website (archived): web.archive.org/web/20131106021014/http://au.humanityplus.org/conference
The panel was held at H+ @Melbourne 2011, hosted by #SciFuture.
Humanity+ @Melbourne 2011 Conference Website (archived): web.archive.org/web/20131106021014/http://au.humanityplus.org/conference #hplus #humanityplus
au.linkedin.com/in/andrewjamesperry
The talk was given at H+ @Melbourne 2011 hosted by #SciFuture.
Humanity+ @Melbourne 2011 Conference Website (archived): web.archive.org/web/20131106021014/http://au.humanityplus.org/conference
#transhumanism #igem #biotech #humanityplus
Facilitated by Will Donovan and Jeremy Nagel
This rapid prototyping exercise was facilitated at H+ @Melbourne 2011 hosted by #SciFuture.
Humanity+ @Melbourne 2011 Conference Website (archived): web.archive.org/web/20131106021014/http://au.humanityplus.org/conference
Most of his career has involved commercial R&D, typically with emerging technologies: staying one step ahead and making technology choices on the cusp of becoming mainstream, often undertaking system architecture and technical team leader roles on projects that combine software and hardware.
Specialties: System architecture, technical team lead, distributed systems and embedded systems prototyping, design and implementation. Current focus: Drones, robotics, A.I / Machine Learning, video processing
#IoT #HackerSpace #HumanityPlus
Similar slides:
- slideshare.net/geekscape/internet-of-things
- slideshare.net/geekscape/internet-of-things-smart-energy-groups
au.linkedin.com/in/geekscape
http://github.com/geekscape/Aiko
This presentation was held at H+ @Melbourne 2011 hosted by #SciFuture.
Humanity+ @Melbourne 2011 Conference Website (archived): web.archive.org/web/20131106021014/http://au.humanityplus.org/conference