Stevan Harnad - Symbol Grounding part 2
Science, Technology & the Future | 2020-12-03

Further discussion with Stevan Harnad on symbol grounding - covering extended cognition, consciousness, meaning, hive minds, GPT-3, a-life, grounding categories, vanishing intersections, natural kinds, essences & transitions, evolutionary psychology, learning & Skinner boxes, dictionaries & grounding sets, shared cognition, the identity of indiscernibles, ontology and feeling primitives.
02:33 Extended cognition vs extended consciousness 03:57 Does every part of our modular brain contribute to consciousness? 05:04 Disagreement on the meaning of the word 'cognition' 07:25 Wittgenstein - the meaning of a word is its use 08:51 Shared grounding in a hive? 10:40 Extending reach vs extending hands or arms 12:34 GPT-3 as a writing aid. Chatting with dead philosophers 16:30 Lost in the hermeneutic hall of mirrors 19:59 A metaphor for symbol grounding? 21:54 Further on what symbol grounding is: categories from propositions - affording the capacity to acquire and transmit grounded categories through language (instead of by trial and error) 25:34 Artificial life - mushroom (toy) example of symbol grounding 32:30 Grounding categories are not just concrete objects, but can be abstract concepts (e.g. sharpness) 33:42 GPT-3 Goethe revisited 37:22 Vanishing intersections. Chomsky's work on universal grammar (syntax) & the poverty of the stimulus 48:34 How do humans learn categories? 3 ways: supervised, unsupervised and through language 52:28 What happens when we find evidence that disconfirms something we have already learned? (e.g. a black swan) 55:46 Natural kinds 59:05 Natural kinds by transition? 01:02:15 Essences and transitions 01:05:48 How much of our current behavior has homologues in the ancestral environment? 01:10 Selective behavior - example of the peacock's tail that controls for cheating 01:12:15 Parasitism 01:14:13 To what degree did our capacity for generalization come before grammar? 01:14:48 Learning, Skinner boxes, abstraction and categorization. Is (category) learning itself the mother of all generalizations? 01:18:58 Dictionaries & grounding sets 01:19:53 What is meant by a dictionary? The nucleus (can define all words inside and outside itself), the core (can define all inside itself), satellites (around the core, tiny clusters of words).
Minimal grounding sets are not dictionaries - they can only define words outside of themselves. A minimal grounding set is also the smallest set of words that can define all the rest of the words through combinations of words - it turns out there are many of them, so finding the ultimate grounding set is an NP-complete problem (using a directed graph). All grounding sets are part core and part satellite. The size of the minimal grounding set is between 750 and 1,500 words. 01:30:48 Grounding sets, and the dictionary game 01:35:03 The easy and the hard problems of consciousness. Why is the hard problem of consciousness hard? 01:39:56 Shared cognition? Shared experience? The case of the Hogan sisters, conjoined twins joined at the head - sharing parts of their brains 01:44:21 The identity of indiscernibles 01:48:09 Renounce ontology! Psychologist Stevan Harnad is a naive realist; he is interested in what organisms can do, and how they feel 01:49:28 Feeling primitives - can feels be reduced to monads or are they complex?
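The grounding-set idea above lends itself to a toy sketch. The miniature dictionary below is invented for illustration: a seed set of words "grounds" the dictionary if every remaining word eventually becomes definable from words already grounded, and the brute-force search for the smallest such seeds (exponential in the number of words) hints at why the general problem is NP-complete.

```python
from itertools import combinations

# Toy dictionary: each word is defined using other words in the dictionary.
toy_dict = {
    "animal": ["living", "thing"],
    "dog":    ["animal", "barks"],
    "barks":  ["dog", "sound"],
    "sound":  ["thing"],
    "living": ["thing"],
    "thing":  ["thing"],  # circular, as real dictionaries ultimately are
}

def grounds_all(seed, dictionary):
    """True if every word becomes definable starting from `seed`."""
    known = set(seed)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in known and all(w in known for w in definition):
                known.add(word)
                changed = True
    return known == set(dictionary)

def minimal_grounding_sets(dictionary):
    """Brute-force the smallest grounding sets - exponential search,
    mirroring the NP-completeness of the general problem."""
    words = sorted(dictionary)
    for size in range(1, len(words) + 1):
        found = [set(c) for c in combinations(words, size)
                 if grounds_all(c, dictionary)]
        if found:
            return found
    return []

print(minimal_grounding_sets(toy_dict))
```

In this toy, "thing" must be in every grounding set (its definition is circular), plus one word from the "dog"/"barks" cycle - so there are several minimal grounding sets of the same size, just as the passage describes for real dictionaries.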
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Extended Arm - STELARC (2022)
Science, Technology & the Future | 2022-09-11

Stelarc discusses the Extended Arm in this interview with Adam Ford. "The Extended Arm is an eleven-degree-of-freedom manipulator with wrist flexion, wrist rotation, thumb rotation, individual finger flexion, with each finger splitting open, so each finger can potentially be a gripper in itself. The artist’s fingers rest on a panel of switches enabling the selection of pre-programmed sequences of finger, thumb and wrist movements. The clicking fingers, the compressed air and solenoid generate the sounds when performing. The Extended Arm extends the artist’s right arm to primate proportions."
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

AI, Sentience & the Binding Problem of Consciousness - is LaMDA sentient? - Andrés Gómez Emilsson
Science, Technology & the Future | 2022-07-14

Is LaMDA at Google sentient? Is current state-of-the-art AI showing signs of having qualia? The phenomenal binding problem asks us to consider: how can a huge set of discrete neurons form a unified mind? Is topological binding a requirement for AI to be sentient?
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Trustworthy AI and Machine Understanding - Ben Goertzel, Joscha Bach, Monica Anderson
Science, Technology & the Future | 2022-06-17

State-of-the-art deep learning models do not really embody understanding. This panel focuses on AGI transparency, auditability and explainability - the differences between causal understanding and prediction, as well as surrounding practical, systemic and ethical issues. Panelists include AI experts Joscha Bach, Ben Goertzel and Monica Anderson!
0:00 Intro 1:00 Monica Anderson on what is understanding? 2:00 Joscha Bach answers what is understanding? 3:12 Discrepancies in descriptions of understanding 3:54 Joscha on creativity vs deciding and understanding 7:19 Context-free models vs context-containing models (Monica) 10:18 Modelling & embodiment (Joscha) 14:35 Language models (Monica & Joscha) 17:17 Are causal models required for understanding? 18:26 Can imitation become understanding? 20:26 Can we share understanding? 21:08 Systematic differences between people driving cars vs self-driving cars 22:45 Embodiment and symbol grounding. Do people share symbol groundings? 24:02 How is symbol grounding done? (Ben Goertzel, Monica, Joscha) 25:42 Language understanding, what's required to achieve it, and the Turing test 29:57 Trustworthy AI, human aesthetics - how to build a god, or keep the world pre-singularity 34:28 Trustworthiness of useful tools vs superhuman AGI 38:57 Open-ended intelligence 41:34 Why a panel on trustworthy AI? 44:24 Ascription of intentionality onto weak AI - does the illusion make us happy? 46:19 Sophia the robot - useful simulacrum or an abomination? A salty divergence of opinion 49:47 Automated confabulation in GPT-3 - confabulation vs real explanation 52:40 Verification of levels of trustworthiness in AGI - how far can we take it? 58:48 Can everything ethically important be understood by humans? (Joscha) 59:19 GPT-3 confabulation vs human confabulation 01:00:22 Do we want AI to follow anthropomorphic ethics, or open ethics? Status quo ethics, or variable ethics?
01:07:24 Preferred game conditions in a future shared with AGI 01:12:10 Farewell Joscha Bach, welcome Hugo de Garis 01:13:27 Post-singularity family planning (and identity) 01:15:00 Governmental interest in AGI etc 01:18:31 Government interest in the threats of AI/AGI 01:21:07 Image generation, deep fakes and technology to validate truth 01:25:09 AI research in China vs the west 01:27:48 AGI and geopolitics 01:29:32 AGI chip to speed up pattern matching 01:35:52 Pandemic preparedness - how will the world deal with a far worse pandemic than Covid-19?
Can the black-box problem be fully solved without machine understanding (the AI actually ‘understanding’ rather than merely making predictions across massive datasets)?
Will add-on explanation modules be enough to make AI trustworthy?
Can imitation become understanding? Or do we need to develop an entirely different approach to AI than the
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Monica Anderson - The Red Pill of Machine Learning
Science, Technology & the Future | 2022-06-17

Synopsis: The new cognitive capabilities in our machines were the result of a shift in the way we think about problem solving. The shift is the most significant change in AI, ever, if not in science as a whole. Machine Learning based systems are now successfully attacking both simple and complex problems using these novel Methods.
We are experiencing a revolution at the level of Epistemology which will affect much more than just the field of Machine Learning. We want to add more of these new Methods to our standard problem solving toolkit, but we need to understand the tradeoffs.
Bio: Monica Anderson, MSCS, is an independent AI and ML researcher and founder of Syntience Inc.
Her work has focused on the Epistemology of AI but all her theory is based on her experiences of design and implementation of (Human Language) Understanding Machines based on Deep Discrete Neuron Networks since Jan 1, 2001.
She can adopt a Holistic or Reductionist stance as needed, and wants to teach others how to switch. Her current projects include creating a social medium where chat messages are routed by an Understanding Machine. She has been awarded a handful of patents in this field.
She is an ex-Googler, has facilitated 100+ Bay Area AI meetup sessions over 5 years, and plays keyboards and Bridge.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Joscha Bach - Agency in an Age of Machines
Science, Technology & the Future | 2022-06-13

Synopsis: The arrival of homo sapiens on Earth amounted to a singularity for its ecosystems, a transition that dramatically changed the distribution and interaction of living species within a relatively short amount of time. Such transitions are not unprecedented during the evolution of life, but machine intelligence represents a new phenomenon: for the first time, there are agents on earth that are not part of the biosphere. Instead of competing for a niche in the ecosystems of living systems, AI might compete with life itself.
How can we understand agency in the context of the cooperation and competition between AI, humans and other organisms?
0:00 Introduction 1:14 Presentation starts 1:48 Spirits & western confusion about consciousness 5:48 Genesis: an updated version of the origin story (6 stages) 13:30 The history of studying agency 15:07 Today's models & AI systems 18:35 Cybernetics: modeling in the service of control 22:09 Computation vs. cybernetics 24:29 How do neurons compute minds? 26:45 Neural circuits in artificial neural networks 28:17 Is the circuit metaphor wrong? Self organization in biological neurons & Neural Darwinism 32:22 Conscious seed theory (technological design vs. organic growth) 38:31 Hierarchy & design constraints of causal systems, groups, state governments & agents 43:55 The society of mind, self regulation & the consciousness prior 48:53 Attention as an agent & role of consciousness 51:19 Society of minds: human intellect & civilization intellect 53:44 Stages of intelligent agency (societal agency, Maslow's hierarchy, "sacredness") 57:57 Principles for emergent higher level agency (7 virtues) 1:02:20 The alignment problem 1:07:26 Q&A
Bio: Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept-formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.
Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin and the Institute for Cognitive Science at Osnabrück.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Ben Goertzel - Open Ended vs Closed Minded Conceptions of Superintelligence
Science, Technology & the Future | 2022-06-12

Abstract: Superintelligence, the next phase beyond today’s narrow AI and tomorrow’s AGI, almost intrinsically evades our attempts at detailed comprehension. Yet very different perspectives on superintelligence exist today and have concrete influence on thinking about matters ranging from AGI architectures to technology regulation. One paradigm considers superintelligences as resembling modern deep reinforcement learning systems, obsessively concerned with optimizing particular goal functions. Another considers superintelligences as open-ended, complex evolving systems, ongoingly balancing drives toward individuation and radical self-transcendence in a paraconsistent way. In this talk I will argue that the open-ended conception of superintelligence is both more desirable and more realistic, and will discuss how concrete work being done today on projects like OpenCog Hyperon, SingularityNET and Hypercycle potentially paves the way for a path through beneficial decentralized integrative AGI and on to open-ended superintelligence and ultimately the Singularity.
Bio: In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of “achieving complex goals in complex environments”. A “baby-like” artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as “attention values”, with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.
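The knowledge representation described above can be sketched in miniature. This is an illustrative toy, not OpenCog's actual API - the class names, fields and methods are invented - but it shows the shape of a network whose nodes and links carry probabilistic truth values alongside attention values that behave a little like neural-network weights.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """A node in the knowledge network."""
    name: str
    truth: float = 0.5      # probabilistic truth value in [0, 1]
    attention: float = 0.0  # attention value: how salient the atom is

@dataclass
class Link(Atom):
    """A directed link between two named atoms; links carry values too."""
    source: str = ""
    target: str = ""

class KnowledgeNetwork:
    def __init__(self):
        self.atoms = {}

    def add(self, atom):
        self.atoms[atom.name] = atom
        return atom

    def stimulate(self, name, amount=1.0):
        """Boost an atom's attention, e.g. when inference touches it."""
        self.atoms[name].attention += amount

    def most_salient(self, k=1):
        """The atoms the system would attend to next."""
        return sorted(self.atoms.values(),
                      key=lambda a: a.attention, reverse=True)[:k]

net = KnowledgeNetwork()
net.add(Atom("cat", truth=0.9))
net.add(Atom("mammal", truth=0.95))
net.add(Link("cat->mammal", truth=0.9, source="cat", target="mammal"))
net.stimulate("cat", 2.0)
print(net.most_salient(1)[0].name)  # the most attended atom
```

Algorithms operating on such a network (inference, evolutionary search) would read the truth values and use the attention values to decide where to spend compute, which is the role the description above assigns to them.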
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

The Goodness of the Universe - John Smart
Science, Technology & the Future | 2022-06-07

Outer Space, Inner Space, and the Future of Networks

Synopsis: Does the History, Dynamics, and Structure of our Universe give any evidence that it is inherently “Good”? Does it appear to be statistically protective of adapted complexity and intelligence? Which aspects of the big history of our universe appear to be random? Which are predictable? What drives universal and societal accelerating change, and why have they both been so stable? What has developed progressively in our universe, as opposed to merely evolving randomly? Will humanity’s future be to venture to the stars (outer space) or will we increasingly escape our physical universe, into physical and virtual inner space (the transcension hypothesis)? In Earth’s big history, what can we say about what has survived and improved? Do we see any progressive improvement in humanity’s thoughts or actions? When is anthropogenic risk existential or developmental (growing pains)? In either case, how can we minimize such risk? What values do well-built networks have? What can we learn about the nature of our most adaptive complex networks, to improve our personal, team, organizational, societal, global, and universal futures? I’ll touch on each of these vital questions, which I’ve been researching and writing about since 1999, and discussing with a community of scholars at Evo-Devo Universe (join us!) since 2008.
For fun background reading, see John’s Goodness of the Universe post on Centauri Dreams, and “Evolutionary Development: A Universal Perspective”, 2019.
John writes about Foresight Development (personal, team, organizational, societal, global, and universal), Accelerating Change, Evolutionary Development (Evo-Devo), Complex Adaptive Systems, Big History, Astrobiology, Outer and Inner Space, Human-Machine Merger, the Future of AI, Neuroscience, Mind Uploading, Cryonics and Brain Preservation, Postbiological Life, and the Values of Well-Built Networks. He is CEO of Foresight University, founder of the Acceleration Studies Foundation, and co-founder of the Evo-Devo Universe research community and the Brain Preservation Foundation. He is editor of Evolution, Development, and Complexity (Springer 2019), and Introduction to Foresight: Personal, Team, and Organizational Adaptiveness (Foresight U Press 2022). He is also author of The Transcension Hypothesis (2011), the proposal that universal development guides leading adaptive networks increasingly into physical and virtual inner space.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Stuart Armstrong - How Could We Align AI?
Science, Technology & the Future | 2022-06-07

Synopsis: The goal of Aligned AI is to implement scalable solutions to the alignment problem, and distribute these solutions to actors developing powerful transformative artificial intelligence.

What is Alignment?
Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they act in the interests of their designers, their users, and humanity as a whole. Failure to align them could lead to catastrophic results.
Our long experience in the field of AI safety has identified the key bottleneck for solving alignment: concept extrapolation. What is Concept Extrapolation?
Algorithms typically fail when they are confronted with new situations - they go out of distribution. Their training data will never be enough to deal with all unexpected situations - thus an AI will need to safely extend key concepts and goals, similarly to - or better than - how humans do it.
This is concept extrapolation, explained in more detail in this sequence. Solving the concept extrapolation problem is both necessary and almost sufficient for solving the whole AI alignment problem.
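The out-of-distribution failure mode described above can be sketched in a few lines. The numbers, threshold and "learned rule" here are invented for illustration; the point is only the shape of the safeguard - apply the learned behaviour in-distribution, and abstain (or defer) rather than extrapolate blindly when an input falls far outside the training data.

```python
import statistics

# Training inputs the "model" has actually seen: a narrow range around 5.0.
train_inputs = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
mean = statistics.mean(train_inputs)
std = statistics.stdev(train_inputs)

def predict(x, z_threshold=3.0):
    """Apply the learned rule only in-distribution; otherwise abstain."""
    z = abs(x - mean) / std
    if z > z_threshold:
        return None  # out of distribution: defer to a safer fallback
    return x * 2.0   # stand-in for whatever rule was learned in training

print(predict(5.0))   # in distribution: the learned rule applies
print(predict(50.0))  # far out of distribution: abstains with None
```

Concept extrapolation asks for something stronger than this abstention: extending the concept itself to the new situation. But even this crude distance check illustrates why a fixed training distribution can never cover all unexpected inputs.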
This talk is part of the ‘Stepping Into the Future‘ conference.
Bio: Dr Stuart Armstrong, Co-Founder and Chief Research Officer
Previously a Researcher at the University of Oxford’s Future of Humanity Institute, Stuart is a mathematician and philosopher and the originator of the value extrapolation approach to artificial intelligence alignment. He has extensive expertise in AI alignment research, having pioneered such ideas as interruptibility, low-impact AIs, counterfactual Oracle AIs, the difficulty/impossibility of AIs learning human preferences without assumptions, and how to nevertheless learn these preferences. Along with journal and conference publications, he posts his research extensively on the Alignment Forum.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

AGI via Deep Neuro & Bio-mimicry - John Smart (short)
Science, Technology & the Future | 2022-06-07

A short discussion before John Smart's talk at the Stepping Into the Future conference, where he discusses his idea that the only easy path to general intelligence is via neuro- and bio-mimicry.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Anders Sandberg - Grand Futures – Thinking Truly Long Term
Science, Technology & the Future | 2022-06-06

Synopsis: How can we think rigorously about the far future, and use this to guide near-term projects? In this talk I will outline my “grand futures” project of mapping the limits of what advanced civilizations can achieve – in terms of survival, expanding in space, computation, mastery over matter and energy, and so on – and how this may interact with different theories about what truly has value.
For some fun background reading, see ‘What is the upper limit of value?’, which Anders Sandberg co-authored with David Manheim.
This talk is part of the ‘Stepping Into the Future‘ conference.
Anders Sandberg is a senior research fellow at the Future of Humanity Institute at the University of Oxford and a research associate at the Institute for Future Studies in Stockholm. Anders' background is in computational neuroscience, but for the past 20 years he has been working on neuroethics, global catastrophic risk, long-range futures and reasoning about uncertainty.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Paradise Engineering - David Pearce & Andrés Gómez Emilsson
Science, Technology & the Future | 2022-05-23

What is the most wonderful experience you have had in your life?
Now imagine if every moment in your life could be as good as this experience, or even better. Other things being equal, wouldn’t it be nice if we had higher quality lives?
For much of history, talk of ‘paradise engineering’ would simply be dismissed as utopian dreaming. Throughout the course of civilization, humanity has been trying to improve its lot by manipulating its environment in innumerable different ways – yet, to be honest, on the inside we’re not significantly happier now than our ancestors on the African savanna – certainly not if suicide, depression and marital breakup statistics etc. are taken seriously.
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

The Future of Consciousness – Andrés Gómez Emilsson
Science, Technology & the Future | 2022-05-16

Synopsis: In this talk we articulate a positive vision of the future that is both viable given what we know, and also utterly radical in its implications. We introduce two key insights that, when taken together, synergize in powerful ways. Namely, (a) the long-tails of pleasure and pain, and (b) the correlation between wellbeing, productivity, and intelligence. This informs us how to distribute resources if we want to maximize wellbeing. Given the weight of the extremes, it is important to take them into account. But because of the causal significance of more typical hedonic ranges, engineering our baseline is a key consideration. This makes it natural to break down the task of paradise engineering into three components:
(1) avoid negative extremes, (2) increase hedonic baseline, and (3) achieve new heights of experience.
With regards to (1): the future of consciousness is anodyne. It lacks extreme suffering in any of its guises. We will see how, if we aim right, a significant proportion of extreme suffering can be prevented with pragmatic technologies already available. Even just applying what we know today would be as significant for the reduction of suffering as the advent of anesthesia was in the context of surgery.
On (2): the future of consciousness is engaging. From novelty generation to Buddhist annealing, baseline-enhancing interventions will change the way we think of life. It is not only about making everyday fun, but also the economics of it.
And (3): the future of consciousness is ecstatic. A science of ecstasy will allow us to safely and reliably sample from a wide range of time-tested ultra-blissful peak experiences. A common cause with other sentient beings, and indeed with the interests of consciousness at large, can be forged in the knowledge of such deep experiences.
They give you a genuine, non-sentimental, reason to live. Together, action on these three levels can significantly advance the cause of eliminating suffering and engineering paradise. And our assessment is: there is a lot of low-hanging fruit in this space. Let’s pick it up!
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

David Pearce - The End of Suffering: Genome Reform and the Future of Sentience
Science, Technology & the Future | 2022-05-08

Synopsis: No sentient being in the evolutionary history of life has enjoyed good health as defined by the World Health Organization. The founding constitution of the World Health Organization commits the international community to a daringly ambitious conception of health: “a state of complete physical, mental and social wellbeing”. Health as so conceived is inconsistent with evolution via natural selection. Lifelong good health is inconsistent with a Darwinian genome. Indeed, the vision of the World Health Organization evokes the World Transhumanist Association. Transhumanists aspire to a civilization of superhappiness, superlongevity and superintelligence; but even an architecture of mind based on information-sensitive gradients of bliss cannot yield complete well-being. Post-Darwinian life will be sublime, but “complete” well-being is posthuman – more akin to Buddhist nirvana. So the aim of this talk is twofold. First, I shall explore the therapeutic interventions needed to underwrite the WHO conception of good health for everyone – or rather, a recognisable approximation of lifelong good health. What genes, allelic combinations and metabolic pathways must be targeted to deliver a biohappiness revolution: life based entirely on gradients of well-being? How can we devise a more civilized signalling system for human and nonhuman animal life than gradients of mental and physical pain? Secondly, how can genome reformists shift the Overton window of political discourse in favour of hedonic uplift? How can prospective parents worldwide – and the World Health Organization – be encouraged to embrace genome reform? For only germline engineering can fix the problem of suffering and create a happy biosphere for all sentient beings.
A big thanks to Adam James Davies for doing the chapters!
0:00 Introduction/beginning 0:07 The Biohappiness Revolution 1:53 Paradise Engineering: when? How? 3:21 Jo Cameron and anandamide 4:30 World Health Organisation and a hundred-year plan to end suffering 5:34 Intro ends; the live presentation by David Pearce begins… 6:30 Our ancestors on the African Savannah 7:40 The daunting scale of the project ahead 8:07 ‘Flagship’ Chinese CRISPR babies and missed opportunities 9:44 Our ‘volume knobs’ for pain and nonsense mutations 12:20 Physical pain, psychological pain and Jo Cameron 14:10 Questions opening 15:07 Hugo de Garis: what is state-of-the-art within CRISPR and genetic engineering? 18:00 Nick Bostrom’s appearance on Joe Rogan’s show! The need for charismatic leadership within the suffering-abolitionist movement 19:50 Andres Gomez Emilsson: the necessity of enhancing more than one characteristic, for example: intelligence as well as hedonic set-points 20:28 The pitfalls of enhancing intelligence 23:40 “Life in the Year 3000”, and the likelihood of nuclear war 25:03 Hugo: how ambitious should we be to begin with, considering the sheer number of genes in the human genome? 26:05 Cloning super-geniuses like John von Neumann 27:20 The inherent ignorance of Turing Machines and classical digital computers 28:50 Solving the Phenomenal Binding Problem, and obscure disorders of various types rooted in the breakdown of phenomenal binding 31:04 Question from ? How do you test if a system has solved the Binding Problem? P-Zombies, et cetera 32:50 Strong emergence is like magic? 33:50 More on the Binding Problem and quantum mind 36:56 An appraisal of Andres and his knowledge about consciousness 37:15 More on genetic engineering, looping back round to Hugo’s last question; gene therapy’s important role to play in ending suffering 38:29 Another question from ? Can superhappiness ‘naturally’ follow from intelligence enhancement, or vice versa?
40:30 The abolitionist project is already technically feasible for both humans and non-humans - it is not sci-fi! 42:30 Successfully engineering superintelligence might be more of a challenge than even ending suffering! 43:29 Another question from ? Can there be a formal mathematical language for philosophical and metaphysical statements? 45:35 Hugo: did Leibniz think about this problem? (No answer) 46:49 Adam Ford reminds guests and audience of the next presentation set to begin soon 47:33 Adam asks David about the mainstream normalisation of suffering-abolitionism 49:08 Adam asks David about his thoughts on Yuval Noah Harari and his ideas 50:20 Neil asks everyone, “What would it feel like to be Jo Cameron?” 50:50 Anders Sandberg’s ‘ridiculously high’ hedonic set-point, and others with similar ‘conditions’ (see ‘hyperthymia’, for example) 52:55 Andres asks, “is there an ideal state of consciousness?” 56:50 End
Many thanks for tuning in!
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Are we fit for the future?
Science, Technology & the Future | 2022-04-30

Panelists: James Hughes, PJ Manney (both at IEET), and Pramod Nayar (Hyderabad Uni) discuss humanity's fitness for the future – covering important points including:
- Are we morally equipped to deal with humanity's grand challenges?
- If the majority population of a democratic state were morally deficient, would it be okay to morally enhance the population, or does this cross a line (i.e. by manipulating the population's will)?
- Whose morals?
- Who are the ones to be morally enhanced?
- Will it be compulsory?
- Won't taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?
- Shouldn't people be concerned that use of enhancements which alter character traits might undermine the consumer's authenticity?
- How can we alleviate aspects of the dark factor of personality (D factor) today, and in the future?
Cyborg Virtues: Using BCIs for Moral Enhancement - James Hughes
Science, Technology & the Future | 2022-04-30 | James Hughes discusses the neuroanatomy of moral cognition, and the potential for brain stimulation and brain-computer interfaces to modulate moral emotions, cognition and behavior.
Synopsis: Links between brain structures and cognition began with studies of victims of brain injuries, and became more precise with advances in brain imaging. In the last two decades research has demonstrated that moral emotions and cognition can be modulated with internal and external stimulation focused on particular brain structures. While non-invasive methods of neuromodulation, like transcranial direct current stimulation, are widely available for the healthy, their effects are more diffuse and uncertain. Deep brain stimulation electrodes or implanted computer chips allow more precise sensing and stimulation, but are only applicable for severe conditions such as intractable epilepsy and treatment-resistant depression. As BCIs are miniaturized and given more capacities they will be more feasible for use by those without severe disabilities. Soon hundreds or thousands of microscopic computer chips, sensors and electrodes implanted in the brain will allow real-time sensing, inhibition and boosting of thoughts and emotions, opening up morally enhancing applications. Individuals with brain disorders that lead to violence and criminality, for instance, could be offered BCI therapy as an alternative to psychiatric treatment or incarceration. This essay proposes a model of six virtues that could be targets of neuromodulation: self-control, caring, intelligence, positivity, fairness and transcendence. Key parts of the brain implicated in the functioning of each virtue are reviewed as possible targets for morally enhancing neuromodulation.
Posthumanism and its Moral Imperatives – Pramod K. Nayar
Science, Technology & the Future | 2022-04-30 | Octavia Butler's fiction underscores heightened empathy as a possible feature of future humans (who may be co-evolved with alien species, in Butler's imagination). Yet, in Butler's fiction, the morally enhanced beings ponder over the freedom they now possess. This talk, building on the view that moral enhancement (ME) requires multiple virtues (James Hughes), examines the linkage of ME with human freedom and/or autonomy.
Bio: Pramod Nayar teaches M.A. courses in Literary Theory, the English Romantics and Postcolonial Literatures. His interests lie in English colonial writings on India, travel writing, Human Rights and narratives, posthumanism, postcolonial literature, Cultural Studies (celebrity studies, digital cultures) literary & cultural theory and graphic novels, with significant and regular publications in these areas.
Foresight Superpowers - John Smart
Science, Technology & the Future | 2022-03-22 | John Smart gives an outline of topics in his new book 'Introduction to Foresight: Personal, Team, and Organizational Adaptiveness'.
John Smart is a futurist and scholar of accelerating change. He is CEO of Foresight University, founder of the Acceleration Studies Foundation, and co-founder of the Evo-Devo Universe research community, and the Brain Preservation Foundation. He is editor of Evolution, Development, and Complexity (Springer 2019), and Introduction to Foresight: Personal, Team, and Organizational Adaptiveness (Foresight U Press 2022). He is also author of The Transcension Hypothesis (2011), the proposal that universal development guides leading adaptive networks increasingly into physical and virtual inner space.
James Hughes - The Future of Work
Science, Technology & the Future | 2022-03-12 | The pandemic has launched a debate about the future of work around the world. Those who can work remotely have often found they prefer remote or flexible, hybrid options. The Great Resignation has put upward pressure on wages and benefits in the service sector, encouraging the implementation of automation. Climate change mitigation is encouraging a shift towards “green jobs.” Rapid changes in the labor market have made the payoffs of higher education uncertain for young people, while many societies are entering an old-age dependency crisis with too few workers paying taxes for growing numbers of pensioners. Before the pandemic, proposals for universal basic income (#UBI) were seen as necessary adaptations to imminent technological unemployment, and during the pandemic many countries provided temporary UBI to keep people safe. We are now poised for a global discussion about whether we need to work at all, and what kinds of jobs are desirable.
Andrés Gómez Emilsson - The Aesthetic of the Meta Aesthetic
Science, Technology & the Future | 2022-03-03 | ...the Meaning Nexus Between Memeplexes. Are there facts about whether something is beautiful, or good art, or are such things purely a matter of opinion?
Synopsis: In the spirit of fostering a collaborative relationship between the memeplexes that currently occupy the minds of the post-political intelligentsia, Andrés shares a conceptual framework he believes is useful for sense-making independently of one’s subcultural affiliation. Namely, he will share a theory of aesthetics.
Aesthetics go much deeper than merely the preference one may have for clusters and correlations of sensorial patterns. Aesthetics, in fact, cut to the very root of our concept of identity.
Inspired by Rob Burbea’s Soulmaking, Andrés will discuss how aesthetics can be broken down into: (1) Eros – the set of images that energize one’s thirst for life, (2) Psyche – the network of relationships between Eros imagery, and (3) Logos – the overarching ontology upon which Psyche and Eros are based.
Andrés discusses how these components emerge from specific philosophical background assumptions, are then adopted as social aesthetics, and ultimately risk becoming merely tribal markers. Insofar as people are caught up in the dissonance between aesthetics without understanding the Logos that breathes life into them, they will continue to fight in unproductive ways. Ultimately, a careful map of the valence that an aesthetic associates with each symbol will allow us to create a music theory of aesthetics and liberate people from the burden of pointless memetic wars. That is, we can predict in advance what kinds of discussions are likely to break down due to different valences on key load-bearing symbols, and re-route them through a different path that nonetheless achieves the desired information processing. An understanding of how aesthetics bias our valuations would itself be an aesthetic, of course: the aesthetic of the meta-aesthetic.
This talk argues that such a meta-aesthetic could become the nexus that allows us to “get the best of each world”. The end-goal: to make aesthetic pluralism game-theoretically stable.
James Hughes - NATO & the Russia / Ukraine Conflict
Science, Technology & the Future | 2022-03-02 | James Hughes (ED of IEET, socialist anti-communist democrat) discusses the Russia/Ukraine conflict in relation to the history of NATO and Russia, and the social democratic project. He then discusses global catastrophic risks of nuclear weapons and biotech, and the problems surrounding fake news.
The Institute for Ethics and Emerging Technology - ieet.org
David Pearce - The Biohappiness Revolution
Science, Technology & the Future | 2021-10-06 | Philosopher David Pearce discusses the Biohappiness Revolution, and his forthcoming book. What is health? According to the WHO: "Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity."
Further on the right to health: "The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition." Read more of the Constitution of the World Health Organization (1948), ref: https://www.who.int/governance/eb/who_constitution_en.pdf
Background reading: en.wikipedia.org/wiki/Biohappiness
Robin Hanson - UFOs what the hell?
Science, Technology & the Future | 2021-10-04 | Robin Hanson gives a talk (starting at 5:31) and addresses questions afterwards, building a plausible (though perhaps unlikely) story of UAP/UFO aliens.
"Yes, the universe looks completely dead; we see no signs of life outside Earth, even though over millions of years advanced aliens could have made some big visible changes. Some possible explanations:
1. Aliens arise so rarely that the nearest ones are too far to see, or to have travelled here,
2. Aliens are common but simply can't travel between stars or make big visible changes,
3. Aliens are common and travel everywhere, but enforce rules against visible changes, or
4. Aliens arise rarely, but in small clumps; the first in a clump to appear can control the others.
Of these, only the last two can put aliens here now, and #3 seems too much a conspiracy (i.e., coordinate to hide) theory for my tastes. But scenario #4 works, and could plausibly result from “panspermia.”
That is, simple life might have arisen on a planet Eden long ago, via a very rare event. (My research suggests this happens only once per million galaxies.) After life evolved at Eden for billions of years, a rock hit Eden, kicking up another rock that drifted for millions of years carrying life to seed our Sun’s stellar nursery. A nursery that held thousands of new stars packed close with many rocks flying around, allowing life to spread quickly to them all."
Many thanks to those who participated in the Q&A session.
Anders Sandberg - Aliens, Bayesians and Blurry Footage of UFOs
Science, Technology & the Future | 2021-09-04 | Are we alone in our sector of the cosmos? Do sightings of unidentified aerial phenomena herald an alien invasion? The declassification of US military footage has rekindled a fiery feud - but before committing to a position in this debate, how can we assess the likelihood (given the existing evidence) that we are being visited by ETs with technology far superior to our own?
Anders Sandberg beamed in to make certain disclosures about Bayesian statistics applied to recent UAP/UFO 'sightings' - so let's all put on our thinking caps - and if you like yours silver, shiny and foiled that's fine too..
We also spoke about convergences in cognition and ethics, which apply not only to aliens but also to AI.
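The Bayesian point above can be made concrete with a toy calculation. This is purely an illustrative sketch (not from the talk, and all numbers are made-up assumptions): it shows why evidence that is nearly as likely under mundane explanations as under the alien hypothesis barely moves a small prior.

```python
# Illustrative only: a minimal Bayes update for "given blurry footage E,
# how much should P(aliens visiting) change?" All numbers are assumptions.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Blurry footage is almost as likely under mundane explanations
# (sensor artifacts, drones, balloons) as under visiting ETs,
# so even favourable likelihoods leave the tiny prior nearly unchanged.
print(posterior(prior=1e-6, p_e_given_h=0.9, p_e_given_not_h=0.5))
```

With these assumed numbers the posterior is about 1.8e-6: less than double a one-in-a-million prior, which is the sense in which blurry footage is weak evidence.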
Ben Goertzel - Approaches Towards a General Theory of General AI
Science, Technology & the Future | 2021-06-06 | The General Theory of General Intelligence: A Pragmatic Patternist Perspective - paper by Ben Goertzel: arxiv.org/abs/2103.15100
Abstract: "A multi-decade exploration into the theoretical foundations of artificial and natural general intelligence, which has been expressed in a series of books and papers and used to guide a series of practical and research-prototype software systems, is reviewed at a moderate level of detail. The review covers underlying philosophies (patternist philosophy of mind, foundational phenomenological and logical ontology), formalizations of the concept of intelligence, and a proposed high level architecture for AGI systems partly driven by these formalizations and philosophies. The implementation of specific cognitive processes such as logical reasoning, program learning, clustering and attention allocation in the context and language of this high level architecture is considered, as is the importance of a common (e.g. typed metagraph based) knowledge representation for enabling "cognitive synergy" between the various processes. The specifics of human-like cognitive architecture are presented as manifestations of these general principles, and key aspects of machine consciousness and machine ethics are also treated in this context. Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered."
Talk held at AGI17 - http://agi-conference.org/2017/
#AGI17 #AGI #ArtificialIntelligence #Understanding #MachineUnderstanding #CommonSense #ArtificialGeneralIntelligence #PhilMind
en.wikipedia.org/wiki/Artificial_general_intelligence
Hugo de Garis - AI, Species Dominance and Our Cybernetic Future
Science, Technology & the Future | 2021-05-08 | Hugo de Garis on AI, the story leading up to where we are now, and the possibilities for AI in the not too distant future. We have seen AI sprint past us in many cognitive domains, and in the coming decades we will likely see AI creep up on human-level intelligence in other domains - once this becomes apparent, AI will become a central political issue, and nations will try to out-compete each other in a dangerous AI arms race. As AI encroaches further into areas of economic usefulness where humans traditionally dominated, how might we avoid uselessness and stay relevant? Merge with the machines, says Hugo.
Artificial Intelligence will be Smarter than You in Your Lifetime
Science, Technology & the Future | 2021-04-25 | Adam Ford argues for the position that artificial intelligence will be smarter than humans in the lifetime of a young adult (sometime before the end of this century). This was one side of a debate put on by Melbourne University for a philosophy course. Adam discusses reasons why AI surpassing human-level intelligence is likely and important: approaches to thinking about the issue, what evidence there is for AI becoming superintelligent, what experts think about this issue, the history of the idea, the potential outcomes of superhuman AI, what's at stake... and more.
Why discuss this issue? Why is AI important? Intelligence is powerful, it's a force multiplier.
Darwin Day Interviews - A Celebration of Science & Reason on the 12th of Feb
Science, Technology & the Future | 2021-02-10 | Darwin Day is held on the 12th of February - a celebration to commemorate the birthday of Charles Darwin on 12 February 1809. The day is used to highlight Darwin's contributions to science and to promote science in general. Darwin Day is celebrated around the world - this collection of interviews was conducted in Melbourne, Australia.
Interviewees include: James Fodor, Cameron Ashendale, Alice Knight, Rick Barker, Chris Watkins, Francesco Orsenigo, Chris Guest, Elida Radig & Sirius
Filmed at a Darwin Day picnic in Melbourne, Australia.
The picnic was put on by these groups: - Rationalist Society of Australia - Australian Skeptics Victorian Branch - Humanist Society of Victoria - Progressive Atheists
Robin Hanson - Grabby Aliens - How Far Away Are Expansionist Aliens?
Science, Technology & the Future | 2020-12-28 | Robin discusses his new model for predicting how far away (in space & time) expansionist (or aggressive, or grabby (GC)) alien civs might be. He describes his median estimate that, conditional on our survival and continued expansionist growth, we will meet this kind of alien in approximately 500 million years. It's difficult to believe such civs are very rare, since our abiogenesis and rise to civilization appeared somewhat early in the history of the universe.
In sum, it is possible to estimate how far away in space and time the nearest aliens are, if one is willing to make these assumptions:
- It is worth knowing how far away grabby alien civs (GCs) are, even if that doesn't tell us about other alien types.
- Try-try parts of the great filter alone make it hard for any one oasis to birth a GC in 14 billion years.
- We can roughly estimate the speed at which GCs expand, and the number of hard try-try steps.
- Earth is not now within the sphere of control of a GC.
- Earth is at risk of birthing a GC soon, making today's date a sample from the GC time-origin distribution.
Please also check out the other interviews with Robin Hanson on the Great Filter and Burning the Cosmic Commons.
John Horgan - Pay Attention: Sex, Death, and Science
Science, Technology & the Future | 2020-12-27 | Science writer John Horgan discusses AI, consciousness, solipsism, intellectual humility, the Fermi paradox and his new book 'Pay Attention: Sex, Death & Science' - which I found myself enjoying a lot.
Book: https://www.amazon.com.au/Pay-Attention-Sex-Death-Science/dp/1949597091
01:08 Science Communication & the information deficit explanation of public hostility / skepticism towards science 05:11 The tension between sober-mindedness in science and the need to translate science to the public in an engaging way. Hype and exaggeration in science. 08:40 The End of Science - has science reached an era of diminishing returns? 11:13 Is it possible for science to describe things like the abiogenesis of life on Earth or consciousness? 15:34 Evidence of fossilized microbes from Mars? 17:44 The Great Filter? Why can’t we see any evidence of other galactic expansionist alien civs? 20:50 Humans’ seeming need to fill in the gaps with mystical or highly speculative explanations. Epistemic humility. 25:17 Covering an AI Ethics conference 30:22 Can we automate ethics? 32:05 The term singularity 35:15 Basic AI Drives 36:33 What would SI do once it had enough resources? 40:20 Progress in AI. Was part of the reason for an AI winter the focus on symbolic AI? AI as a project to understand ourselves. 47:09 Do we need sentient machines to achieve highly capable machines? 49:08 Can consciousness be measured? Integrated Information Theory (IIT) and panpsychism. 53:20 Solipsism and epistemic humility 57:24 The Hogan sisters and split brain experiments 01:03:01 John Horgan questions his own ‘End of Science’ thesis 01:01:12 Perhaps we need some kind of machine cognition to make certain kinds of progress in science. Dennis Overbye thinks AI will make progress with a grand unified theory. 01:01:53 AlphaFold - protein folding prediction 01:03:28 AI as a black box - extraordinarily powerful opaque AIs & the replication crisis in AI 01:07:44 Pay Attention: Sex, Death, and Science 01:09:43 A book on the horizon on quantum mechanics
James Barrat - Our Final Invention Revisited
Science, Technology & the Future | 2020-12-24 | Merry Christmas! James Barrat is a documentary filmmaker, speaker, and author of the nonfiction book Our Final Invention: Artificial Intelligence and the End of the Human Era. We discuss the progress in financing and engineering AI - the issues of AI safety are still just as relevant as they were in 2013. James thinks we need policy reform - if anyone wants to make a difference, run for cabinet or alert those in cabinet to the issues around AI safety. We also discuss whether AI can help with the AI safety problem itself - in relation to whether AI can understand.
Book 'Our Final Invention': https://www.amazon.com.au/Our-Final-Invention-James-Barrat/dp/0312622376
https://en.wikipedia.org/wiki/James_Barrat
Ross Gayler - Data Science for Credit Assessment - Monash University
Science, Technology & the Future | 2020-12-09 | Interviewed by Adam Ford for Monash University.
Yuri Deigin - Defeating Aging
Science, Technology & the Future | 2020-12-06 | Yuri Deigin, MBA, is a serial biotech entrepreneur, longevity research evangelist and activist, and a cryonics advocate. He is an expert in drug development and venture investments in biotechnology and pharmaceuticals. He is the CEO of Youthereum Genetics and the Vice President of the Science for Life Extension Research Support Foundation. http://youthereum.ca
Yuri has a track record of not only raising over $20 million for his previous ventures but also initiating and overseeing 4 clinical trials and several preclinical studies, including studies in transgenic mice.
At Youthereum Genetics, Yuri is currently leading a project dedicated to developing an epigenetic rejuvenation gene therapy, as intermittent epigenetic partial reprogramming demonstrated great experimental results in mice: it extended their lifespan by up to 50%.
His life goal is to do everything possible to minimize human suffering from various diseases, especially terminal age-related diseases such as cancer, Alzheimer’s, and cardiovascular disease and to help humanity eradicate them. As an activist, blogger, and speaker, he is conveying the magnitude of human suffering these diseases cause, as they take over 100,000 lives each day. As a biotech entrepreneur, Yuri is doing his modest part by putting together projects that could yield such therapies, splitting his time between Toronto and Moscow.
He believes that one day humanity will cure all such diseases, and he wants to do whatever he can to hasten that day.
Since 2013, Yuri has also served as the Vice President of the nonprofit Science for Life Extension Foundation, whose goal is the popularization of the fight against age-related diseases. To further this cause, Yuri frequently blogs, speaks, writes op-ed pieces, and participates in various TV and radio shows. At the Science for Life Extension Foundation, Yuri is helping the Foundation create and implement social change strategies to create public awareness that aging is a curable disease. He is also working on initiating intergovernmental dialog and public hearings about including aging in the WHO's ICD-11.
Previously, Yuri was the COO and Managing Director at Pharma Bio in Moscow for almost 7 years. From 2015 to 2017, Yuri was the Vice President of Business Development at Manus Pharmaceuticals in Toronto, Canada where he worked on raising funding and forming strategic partnerships to develop breakthrough peptide compounds aimed at preventing Alzheimer’s disease. Before that, he was the VP of Business Development at Peptos Pharma in Moscow.
Alexander Fedintsev: Accumulation of Damage to Long-Living Macromolecules
Science, Technology & the Future | 2020-12-04 | Interview with Alexander Fedintsev conducted at the Undoing Aging conference, Berlin, 2019.
Alexander Fedintsev is a scientist and machine learning engineer. His scientific background lies in the field of bioinformatics, statistics, and machine learning. Alexander earned his M.S. in computer science from the National Research University "Moscow Power Engineering Institute".
Alexander worked in the Institute of Antimicrobial Chemotherapy as a bioinformatician. He also collaborated with Professor Alexey Moskalev's lab on aging research. After quitting academia, Alexander switched to machine learning engineering; however, he continued collaborating on aging research with Professor Moskalev.
He developed a highly accurate non-invasive biomarker of aging based on markers of the cardiovascular system. Now his research interest is mainly focused on the role of extracellular matrix (ECM) in the aging process. He and professor Moskalev recently suggested treating non-enzymatic modifications of long-living proteins (mostly, in the ECM) as a 10th hallmark of aging.
Jamais Cascio - are alternative facts created to be believed, or to cast doubt on reality?
Science, Technology & the Future | 2020-11-28 | Futurist Jamais Cascio on how fake news, alternative facts, and #misinformation are evolving - as well as frameworks to help us wade through the fake news and make sense of huge amounts of chaos in a rapidly changing world.
“The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the wrong thing as it is on the ability to get people to doubt the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”
We speak a bit about AI language modelling, the fact that it doesn't understand stuff, and the possibility of creating AI that actually understands stuff. Highlighted in the news recently: language modelling (i.e. GPT-3) is being used to help generate fake news. Interestingly, 'a college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News': theverge.com/2020/8/16/21371049/gpt3-hacker-news-ai-blog
Frameworks for helping understand wickedly complex information - VUCA (Volatile, Uncertain, Complex, and Ambiguous) vs BANI (Brittle, Anxious, Nonlinear, and Incomprehensible):
Jamais’ BANI piece on Medium “Facing the Age of Chaos": medium.com/@cascio/facing-the-age-of-chaos-b00687b1f51d Quote: “There has always been uncertainty and complexity in the world, and we have devised reasonably effective systems to figure out and adapt to this everyday disorder. From weighty institutions like “law” and “religion” to habituated norms and values, even to ephemeral business models and political strategies, much of what we think of as composing “civilization” is ultimately a set of cultural implements that allow us to domesticate change. If we can make disruptive processes understandable, we hope, maybe we can keep their worst implications in check.”
And on BANI “It doesn’t have to be that way. The BANI framework offers a lens through which to see and structure what’s happening in the world. At least at a surface level, the components of the acronym might even hint at opportunities for response: brittleness could be met by resilience and slack; anxiety can be eased by empathy and mindfulness; nonlinearity would need context and flexibility; incomprehensibility asks for transparency and intuition. These may well be more reactions than solutions, but they suggest the possibility that responses can be found.”
James Hughes - Trumpism: Political Division & Distrust
Science, Technology & the Future | 2020-11-06 | James Hughes is interviewed by Adam Ford on the 2020 #Biden / #Trump election - the political divide, attraction to narcissism, distrust & conspiracy theories, technoprogressive & transhumanist goals, technological progress (i.e. GPT3, deepfake video tech) & fake news, how people think about politics, and issues moving ahead. Bio: en.wikipedia.org/wiki/James_Hughes_(sociologist)
Discussion points: 00:28 Demonstrations on counting every vote #everyvotecounts 01:25 Contentious issues in #politics - de-fund the police & socialism 12:43 Strong man politics and the attraction to narcissism #toxicmasculinity 15:20 The appointment of Amy Coney Barrett 23:18 Appealing to transhumanist or more sophisticated Trump supporters - how? 29:16 Ivanka vs Donald as the next republican candidate? Dog whistling vs bull horning 31:08 Will Trump try for 2024? 33:30 A progressive agenda and a republican senate 39:39 Politics & transhumanist goals 46:04 Distrust in the scientific enterprise & science in general. Conspiracy theories 48:06 An update on what IEET are doing 53:09 Contrasting positions of Biden and Trump - Unity vs Division 59:41 Kamala Harris may take over from Biden as president by 2024 03:09 Fake news and emerging technology like #GPT3 & #deepfakes 10:02 Things to do over the next 4 years and beyond to achieve technoprogressive / transhumanist goals
Stevan Harnad - Other Minds: Ethics and Animal Welfare
Science, Technology & the Future | 2020-10-15 | Dr Stevan Harnad (cognitive scientist and author of The Symbol Grounding Problem) discusses animal welfare, ethics and the 'other minds' problem - The only feelings we can feel are our own. When it comes to the feelings of others, we can only infer them, based on their behavior — unless they tell us.. In trying to make do with inferences from behavior, the behavioral sciences have been at pains to avoid “anthropomorphism,”.. our human mind-reading capacities – biologically evolved for care-giving to our own progeny as well as for social interactions with our kin and kind – are really quite acute.. But what about other species? And whose problem is the “Other-Minds Problem”? Philosophers think of it as our problem, in making inferences about the minds of our own conspecifics. But when it comes to other species, and in particular our interactions with them, surely it is their problem if we misinterpret or fail to detect what or whether they are feeling.
00:06 On the urgency of reducing suffering. Is ethics merely an aesthetic? 01:29 Classical utilitarianism. Are happiness and suffering morally symmetrical? 04:48 Negative utilitarianism 05:32 Pin prick arguments 11:37 The problem of other minds, skepticism and the Turing test (chat-bot vs robotic Turing tests) 19:18 GPT3 & Symbol Grounding - while a great text generator, it doesn't understand stuff. 27:17 Stevan Harnad's personal reasons for first becoming a vegetarian and later a vegan 31:48 Are there health issues with being a vegan? 32:21 Informing and sensitizing people about ethical food consumption 33:17 Are there nutritional benefits only found in animal products? 37:22 The ethics of animal experimentation. Conflicts of moral vital interests. 39:17 Covid19 - the cause of almost all pandemics is zoonotic - where humans force animals into each other's habitats, making it easier for pathogens to jump between species (inc humans) 44:48 Pets 49:45 Clean meat - duplicate the taste without the suffering. 51:36 Strategies for reducing animal suffering in industry: 1) Sensitize to horrors and non-necessity, 2) develop clean meat or alternatives to meat 3) scare tactics (i.e. regarding pandemics and environmental issues) 54:32 Disinformation campaigns in the meat industry - Ag-gag en.wikipedia.org/wiki/Ag-gag 58:01 GPT3 a tool for generating fake news? #GPT3 can't understand stuff without symbol grounding.
References "Other bodies, other minds: A machine incarnation of an old philosophical problem" Dr Stevan Harnad - philpapers.org/rec/HAROBO-2 'Other Minds' - Plato Stanford: https://plato.stanford.edu/entries/other-minds/
Ben Goertzel - AGI, GPT3, Understanding & Meaning Generation
Science, Technology & the Future | 2020-10-13 | Ben Goertzel is interviewed by Adam Ford on the current state of play on the road to AGI - the need for AI to generate concise abstract representations. Amazingly funky text generation using transformer networks (GPT-3 as a popular example). What is missing in AI? Symbol grounding, the meaning of meaning and understanding, proto-transfer learning and more.
01:14 Is #GPT3 on the direct path to #AGI? 04:37 Interesting and crazy output of GPT3 - conjuring Philip K. Dick through transformer neural net experimentation 09:26 Faking understanding... The propensity of GPT3 and other transformer ANNs to produce gibberish some of the time reduces practical real-world use. 13:16 GPT3 training data contains distillations of human understanding. Difficulties in developing generative document summarizers. 15:33 Occam's Razor & whether adding vastly more parameters makes a remarkable difference in transformer network capability 23:46 Transformer models in music 27:13 What's missing in AI? Symbol grounding and abstract representation 30:34 Minimum requirements for symbol grounding in AGI - the need for systems that can generate compact abstract representations 34:57 Paper: Symbol Grounding via Chaining of Morphisms arxiv.org/abs/1703.04368 39:52 Paper: Grounding Occam's Razor in a Formal Theory of Simplicity arxiv.org/abs/2004.05269 46:12 OpenCog Hyperon wiki.opencog.org/w/Hyperon 50:44 What is meaning? Are compact abstract representations required for meaning generation? 54:51 What are symbols? How are they represented in transformer networks? How would they ideally be represented in an AGI system? 59:08 Understanding, compression and Occam's Razor - and the need for compact abstract representations in order to achieve generalization 1:03:08 Integrating large transformer ANNs - a modular approach 1:08:43 Proto-transfer learning using concise abstract representations 1:12:15 What's missing in AI atm? What's on the horizon? 1:14:43 Other AGI projects - "Replicode: A Constructivist Programming Paradigm and Language" - Kristinn R. Thórisson: zenodo.org/record/7009 1:14:43 Graph processing units are here (the singularity must be near!)
1:20:28 Why people think it's impossible to achieve AGI this century 1:24:46 The prospect of living to see AGI occur 1:26:04 Superintelligent singleton hard takeoffs and race conditions between competing AGI projects 1:28:49 Centralized AGI development vs it being in the hands of a teeming mass of unorganized humans 1:30:14 The Trump/Biden presidential elections 1:31:28 Looking forward to an AGI 'RObama'-run government
Kind regards, Adam Ford - Science, Technology & the Future - #SciFuture - http://scifuture.org

Joscha Bach - GPT-3: Is AI Deepfaking Understanding?
Science, Technology & the Future
2020-09-11 | Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more 02:40 What's missing in AI atm? A unified coherent model of reality 04:14 AI systems like GPT-3 behave as if they understand - what's missing? 08:35 Symbol grounding - does GPT-3 have it? 09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation 11:13 GPT-3 temperature parameter. Strange output? 13:09 GPT-3 as a powerful tool for idea generation 14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity? 16:32 Increasing GPT-3 input context may have a high impact 16:59 Identifying grammatical structure & language 19:46 What is the GPT-3 transformer network doing? 21:26 GPT-3 uses brute force, not zero-shot learning; humans do ZSL 22:15 Extending the GPT-3 token context space. Current context = working memory. Humans with smaller current contexts integrate concepts over long time-spans 24:07 GPT-3 can't write a good novel 25:09 GPT-3 needs to become sensitive to multi-modal sense data - video, audio, text etc 26:00 GPT-3 as a universal chat-bot - conversations with God & Johann Wolfgang von Goethe 30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)? 32:19 (Correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation? 38:06 Deep-faking understanding 40:06 The metaphor of the Golem applied to civ 42:33 GPT-3 is fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations. 44:32 GPT-3 babbling at the level of non-experts 45:14 Our civilization lacks sentience - it can't plan ahead 46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters? 47:24 GPT3: scaling up a simple idea. 
Clever hacks to formulate the inputs 47:41 Google GShard with 600 billion parameters - Amazon may be doing something similar - future experiments 49:12 Ideal grounding in machines 51:13 We live inside a story we generate about the world - no reason why GPT-3 can't be extended to do this 52:56 Tracking the real world 54:51 MicroPsi 57:25 What is computationalism? What is its relationship to mathematics? 59:30 Stateless systems vs step-by-step computation - Gödel, Turing, the halting problem & the notion of truth 1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines 1:03:54 Infinities can't describe a consistent reality without contradictions 1:06:04 Stevan Harnad's understanding of computation 1:08:32 Causation / answering 'why' questions 1:11:12 Causation through brute-forcing correlation 1:13:22 Deep learning vs shallow learning 1:14:56 Brute-forcing current deep learning algorithms on a Matrioshka brain - would it wake up? 1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient? 1:19:56 Software/OS as spirit - spiritualism vs superstition. Empirically informed spiritualism 1:23:53 Can we build AI that shares our purposes? 1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity 1:31:29 Intelligent design 1:33:09 Category learning & categorical perception: models - parameters constrain each other 1:35:06 Surprise minimization & hidden states; abstraction & continuous features - predicting the dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states. 
1:37:29 'Category' is a useful concept - gradients are often hard to compute - so compressing away gradients to focus on signals (categories) when needed 1:38:19 Scientific / decision-tree thinking vs grounded common-sense reasoning 1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers 1:44:10 Are g factor & understanding two sides of the same coin? What is intelligence? 1:47:07 General intelligence as the result of control problems so general they require agents to become sentient 1:47:47 Solving the Turing test: asking the AI to explain intelligence. If the response is an intelligible & testable implementation plan then it passes? 1:49:18 The term 'general intelligence' inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability 1:52:15 How we perceive color - natural synesthesia & induced synesthesia 1:56:37 The g factor vs understanding 1:59:24 Understanding as a mechanism to achieve goals 2:01:42 The end of science? 2:03:54 Exciting currently untestable theories/ideas (that may be testable by science once we develop precise enough instruments). Can fundamental physics be solved by computational physics? 2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level of the universe? 2:10:05 The Fermi paradox 2:12:19 Existence, death and identity construction

Debate: AI - Artilect War or Utopia? Josh Hall vs Hugo de Garis
Science, Technology & the Future
2020-07-30 | Debate - Josh Hall vs Hugo de Garis on whether AI will result in a utopia or war. - Josh's position: Josh takes the position in this debate that the rise of artificial intelligence will create a utopia for humanity. 
- Hugo's Position: Hugo takes the opposite position, namely that the rise of godlike massively intelligent machines will be catastrophic for humanity, leading to the worst, most passionate war humanity has ever known, using late 21st century weapons, killing billions of people.
Kind regards, Adam Ford - Science, Technology & the Future

Musing on Understanding & AI - Hugo de Garis, Adam Ford, Michel de Haan
Science, Technology & the Future
2020-07-27 | Started out as an interview, ended up being a discussion between Hugo de Garis and (off camera) Adam Ford + Michel de Haan. 00:11 The concept of understanding is under-recognised as an important aspect of developing AI 00:44 Re-framing perspectives on AI - the Chinese Room argument - and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?) 04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?) 05:08 Ah-ha! moments - where the penny drops - what's going on when this happens? 07:48 Is there an ideal form of understanding? Coherence & debugging - ah-ha moments 10:18 Webs of knowledge - contextual understanding 12:16 Early childhood development - concept formation and navigation 13:11 The intuitive ability for concept navigation isn't complete. Is the concept of understanding a catch-all? 14:29 Is it possible to develop AGI that doesn't understand? Are generality and understanding the same thing? 17:32 Why is understanding (the nature of) understanding important? Is understanding reductive? Can it be broken down? 19:52 What would the most basic primitive understanding be? 22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding? Approaches - engineering, and copying the brain 24:34 Is common sense the same thing as understanding? How are they different? 26:24 What concepts do we take for granted around the world - which, when strong AI comes about, will dissolve into illusions, and then tell us how they actually work under the hood? 
27:40 Compression and understanding 29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how? 31:07 A hierarchy of intel - data, information, knowledge, understanding, wisdom 33:37 What is wisdom? Experience can help situate knowledge in a web of understanding - is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature. 35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions? 36:00 Is understanding like high-resolution, carbon-copy-like models that accurately reflect true nature, or a mechanical process? 37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off? 38:37 What comes first - understanding or generality? 40:47 Minsky's 'Society of Mind' 42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines? 48:15 Anthropomorphism in AI literature 50:48 Deism - James Gates and error correction in super-symmetry 52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory? 52:35 The Drake equation, and the concept of the Artilect - does this make Deism plausible? What about the Fermi Paradox? 55:06 Hyperintelligence is tiny - the transcension hypothesis - therefore civs go tiny - an explanation for the Fermi paradox 56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs? 01:01:52 The Great Filter and the Fermi Paradox 01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? 
(Of course without the luxury of peering under the hood) 01:03:09 Does AlphaGo understand Go, or DeepBlue understand chess? Revisiting the Chinese Room argument. 01:04:23 More on behavioral tests for AI understanding. 01:06:00 Zombie machines - David Chalmers' zombie argument 01:07:26 Complex enough algorithms - is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges? 01:08:11 Revisiting behavioral 'Turing' tests for understanding 01:13:05 Shape sorters and reverse shape sorters 01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? The need for adaptivity - understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries... 01:15:11 Neural nets and adaptivity 01:16:41 The AlphaGo documentary - worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?
Filmed in the Dandenong Ranges in Victoria, Australia.
Many thanks for watching!

Posthumanism - Pramod Nayar
Science, Technology & the Future
2020-07-24 | Interview with Pramod K. Nayar on posthumanism 'as both a material condition and a developing philosophical-ethical project in the age of cloning, gene engineering, organ transplants and implants'. The book 'Posthumanism' by Pramod Nayar: amzn.to/2OQEA8z Rise of the posthumanities article: bit.ly/32Q67Pm
0:00 Intro / What got Pramod interested in posthuman studies? 04:16 Defining the terms - what is posthumanism? Cultural framing of natural vs unnatural. Posthumanism is not just bodily or mental enhancement, but involves changing the relationship between humans, non-human lifeforms, technology and non-living matter. Displacement of anthropocentrism. 08:01 Anthropocentric biases inherited from enlightenment humanist thinking and human exceptionalism. The formation of the transhumanist declaration, with part of it focusing on the human perspective, and point 7 of the declaration focusing on the well-being of all sentience. The important question of empathy - not limiting it to the human species. The issue of empathy is a good launching pad for further conversations between the transhumanists and the posthumanists. humanityplus.org/philosophy/transhumanist-declaration 11:10 Difficulties in getting everyone to agree on cultural values. Is a utopian ideal posthumanist/transhumanist society possible? 13:25 Collective societies, hive minds, borganisms. Distributed cognition, the extended mind hypothesis, cognitive assemblages, traditions of knowledge sharing. 16:58 Do the humanities need some form of reconfiguration to shift them towards something beyond the human? Rejecting some of the value systems that enlightenment humanism claimed to be universal. Julian Savulescu's work on moral enhancement. 20:58 Colonialism - what is it? 21:57 Aspects of enlightenment humanism that the critical posthumanists don't agree with. But some believe the posthumanists to be enlightenment haters who reject rationality - is this accurate? 24:33 Trying to achieve agreement on shared human values - is vulnerability, rather than dignity, a usable concept that different groups can agree with? 26:37 The idea of the monster - people's fear of what they don't understand. Thinking past disgust responses to new wearable technologies and more radical bodily enhancements. 
29:45 The future of posthuman morphology and posthuman rights - how might emerging means of upgrading our bodies / minds interfere with rights or help us re-evaluate rights? 33:42 Personhood beyond the human. 35:11 Should we uplift non-human animals? Animals as moral patients becoming moral actors through uplifting? Also, once superintelligent AI is developed, should it uplift us? The question of agency and aspiration - what are appropriate aspirations for different life forms? Species enhancement and Ian Hacking's idea of 'Making up people' - classification and how people come to inhabit the identities that exist at various points in history, or in different environments. lrb.co.uk/the-paper/v28/n16/ian-hacking/making-up-people 38:10 Measuring happiness - David Pearce's idea of eliminating suffering and increasing happiness through advanced technology. What does it mean to have welfare or to flourish? Should we institutionalise wellbeing - a gross domestic happiness, a world happiness index? 40:27 Anders Sandberg asks: Transhumanism and posthumanism often do not get along - transhumanism commonly wears its enlightenment roots on its sleeve, and posthumanism often spends more time criticising the current situation than suggesting a way out of it. Yet there is no fundamental reason both perspectives could not simultaneously get what they want: a post-human posthumanist concept of humanity and its post-natural environment seems entirely possible. What is Nayar's perspective on this win-win vision? 44:14 The postmodern play of endless difference and relativism - what is the good and bad of postmodernism's influence on posthumanist thinking? 47:16 What does postmodernism have to offer both posthumanism and transhumanism? 49:17 Thomas Kuhn's idea of paradigm changes in science happening funeral by funeral. 58:58 - How has the idea of the singularity influenced transhumanist and posthumanist thinking? 
Shifts in perspective to help us ask the right questions in science, engineering and ethics in order to achieve a better future society. 1:01:55 - What AI is good and bad at today. Correlational thinking vs causative thinking. Filling in the gaps as to what's required to achieve 'machine understanding'. 1:03:26 - Influential literature on the idea of the posthuman - especially that which can help us think about difference and 'the other' (or the non-human) (Octavia Butler, James Hughes, Anders Sandberg, Gary Harper, Julian Savulescu, Mark Tenanbaum)
Many thanks for watching!
Consider supporting SciFuture

Stelarc - Contingent & Contestable Futures
Science, Technology & the Future
2020-07-07 | In the age of the chimera, uncertainty and ambivalence generate unexpected anxieties. The dead, the near-dead, the brain dead, the yet to be born, the partially living and synthetic life all now share a material and proximal existence, with other living bodies, microbial life, operational machines and executable and viral code. Digital objects proliferate, contaminating the human biome. Bodies become end effectors for other bodies in other places and for machines elsewhere, generating interactive loops and recursive choreographies. There was always a ghost in the machine, but not as a vital force that animates but rather as a fading attestation of the human.
STELARC – CONTINGENT AND CONTESTABLE FUTURES: DIGITAL NOISE, GLITCHES & CONTAMINATIONS
Stelarc experiments with alternative anatomical architectures. His performances incorporate prosthetics, robotics, VR and biotechnology. He is presently surgically constructing and augmenting an ear on his arm. In 1996 he was made an Honorary Professor of Art and Robotics at Carnegie Mellon University, and in 2002 was awarded an Honorary Doctorate of Laws by Monash University. In 2010 he was awarded the Ars Electronica Hybrid Arts Prize. In 2015 he received the Australia Council’s Emerging and Experimental Arts Award. In 2016 he was awarded an Honorary Doctorate from the Ionian University, Corfu. His artwork is represented by Scott Livesey Galleries, Melbourne. www.stelarc.org
Kind regards, Adam Ford - Science, Technology & the Future

iGEM Project - A Peptide Expression Platform
Science, Technology & the Future
2020-06-22 | The Peptide Expression Platform project was part of the iGEM (International Genetically Engineered Machine) competition.
Speakers: Michelle Chayeb and Hiwot Kelemwok
This presentation was held at H+ @Melbourne 2011 hosted by #SciFuture.
Kind regards, Adam Ford - Science, Technology & the Future

Black Swans, Chaos, Emergence
Science, Technology & the Future
2020-06-22 | Panelists Tony Smith, Meredith Doig and Slade Beard discuss the perils of prediction.
The panel was held at H+ @Melbourne 2011, hosted by #SciFuture.
Kind regards, Adam Ford - Science, Technology & the Future

Andrew Perry - Open Source Biotech
Science, Technology & the Future
2020-06-22 | Andrew Perry is a Research Software Specialist at the Monash Bioinformatics Platform - he solves research computing problems, primarily around capturing, organising, preserving and analysing large volumes of primary research data.
Kind regards, Adam Ford - Science, Technology & the Future

Gamifying Biotech - A Rapid Prototyping Workshop
Science, Technology & the Future
2020-06-22 | Equipped with rapid prototyping kits, audience members brainstormed interesting and sometimes hilarious biotechnology solutions to global problems :).
Facilitated by Will Donovan and Jeremy Nagel
This rapid prototyping exercise was facilitated at H+ @Melbourne 2011, hosted by #SciFuture.
Kind regards, Adam Ford - Science, Technology & the Future

Andy Gelme - Internet of Things
Science, Technology & the Future
2020-06-22 | Andy Gelme is working on connecting as much of the real world to the Internet as possible ... and providing natural user interfaces for interaction.
Most of his career has involved commercial R&D, typically with emerging technologies - staying one step ahead and making technology choices that are on the cusp of becoming mainstream, and often undertaking system architecture and technical team leader roles on projects that combine software and hardware.
Specialties: system architecture, technical team leadership, distributed systems and embedded systems prototyping, design and implementation. Current focus: drones, robotics, AI / machine learning, video processing