AI could be an incredibly powerful tool to fast-track our collective progress, from drought preparedness and advanced climate modelling to supply chain optimisation and energy efficiency.
You'll hear from a panel of experts including:
- AIIA CEO, Simon Bush
- AIIA Chair of the ESG Policy Advisory Network, Patrick Mooney
- GreenSquareDC Founder and CEO, Walt Coulston
- Responsible AI at Scale Think Tank Lead, Judy Slatyer
- Accenture Managing Director - Data & AI, Fergal Murphy
Learn the significance of placing humans at the heart of AI development, and how human-centred AI design fosters transparency, ethics, and adaptability in systems.
Hosted by the National AI Centre's (NAIC) Responsible AI Network (RAIN) and Standards Australia and held during Australia’s AI Month (Nov 15-Dec 15), this hour-long webinar provides an introduction to how AI is driven by data and access to data.
Dr Ian Oppermann, NSW Government’s Chief Data Scientist, will moderate the panel of experts who will discuss:
· What is data-centric AI?
· How can organisations better access, manage and use data for AI applications?
· How do standards play a role in data?
Speakers include:
· A/Professor Fatemeh Vafaee (Deputy Director of UNSW Data Science Hub, Program Lead of Med-Tech.AI, and Founder of OmniOmics.AI Pty Ltd)
· A/Professor Roman Marchant (Head of Research, UTS Human Technology Institute)
· Professor Kimberlee Weatherall (University of Sydney and ARC Centre of Excellence on Automated Decision-Making and Society)
· Dr Thierry Rakotoarivelo (Senior Research Scientist, Information Security and Privacy Group, Software and Computational Systems Program, Data61, CSIRO)
The panel discussion will be followed by a live audience Q&A.
Coordinated by RAIN and CSIRO’s Data61, this workshop examines the cutting-edge AI-powered programs Australia’s national science agency is designing to prevent digital deception.
Research scientists Dr Jieshen Chen and Dr Kristen Moore will demonstrate the tools they’re developing to counter misinformation and deceptive behaviour online.
You'll walk away with a well-rounded understanding of AI's role in both the challenge and the solution, informed and able to safely navigate the modern online landscape.
Hosted by Aurelie Jacquet CIPP/E (Chair of Standards Australia AI committee IT-043) and featuring panellists Dr Kate Bower (UTS Human Technology Institute), Dr Sarah Dods (GHD Digital), Dr Eric Q. (franklin.ai), and Dr Jonathan Earthly (Lloyd's Register), this workshop is an invaluable opportunity to learn ethical AI from world-leading AI experts.
This one-hour session will:
- Define responsible and inclusive AI;
- Examine challenges in implementing responsible and inclusive AI;
- Demonstrate potential solutions to overcome these challenges, including human centred design; and
- Explore existing standards that can support the deployment of responsible and inclusive AI.
Coordinated by the Responsible AI Network and Gradient Institute, this webinar will help your business responsibly navigate the fast-paced AI environment while meeting customer expectations.
AI systems developed without adequate controls can have unintended consequences that can significantly damage company reputation and customer loyalty, making responsible implementation crucial to success.
AI practitioners Alistair Reid and Simon O’Callaghan, authors of the Implementing Australia’s AI Ethics Principles report, will unpack a selection of effective practices, including:
- Impact elicitation
- Data curation
- Fairness measures
- System assessment
Your hosts will explain how each practice works, where it fits in an organisational context, and which tools and guidelines can help implement it.
- Key AI concepts and terminology
- The stages of the AI lifecycle
- The standards development process; and
- Key standards, both existing and in development, that are important to know about.
Hosted by the Responsible AI Network (RAIN) and Standards Australia, this workshop features a range of speakers across industry, government and academia, including:
- NSW Government Chief Data Scientist and University of Technology Sydney Industry Professor Ian Oppermann
- Virtual Ink Australia Director Harm Ellens
- Microsoft's Regional Standards Manager Geoff Clarke
- Western Sydney University Senior Lecturer Dr Rosalind Wang
- Independent AI advisor and AICD representative Dr Ali Akbari
In this one-hour workshop, hosted by the Responsible AI Network (RAIN) in partnership with Gradient Institute, you'll learn:
- How LLMs work
- How businesses are using LLMs
- The benefits and challenges of LLMs
- The various ways businesses can manage their operations to mitigate risk
You'll be guided through the process by Gradient Institute CEO Bill Simpson-Young and Chief Practitioner Lachlan McCalman, with NAIC Strategic Engagement Manager Rita Arrigo hosting.
The first event in this series, 'Introduction to Responsible AI Engineering’ will walk you through the development and application of responsible AI.
Hosted by NAIC Strategic Engagement Manager Rita Arrigo; Responsible AI Science Team Lead at CSIRO’s Data61 and Women in AI Awards 2023 finalist Dr Qinghua Lu; and Superstar of STEM and 2019 ‘Most Influential Asian-Australian Under 40’ Dr Muneera Bano, ‘Introduction to Responsible AI Engineering’ will:
- Guide you through responsible AI methods and practice
- Explain how to put AI ethics principles into practice
- Explore the different levels of governance required for responsible AI systems
- Demonstrate how to create responsible AI by design using the Responsible AI Pattern Catalogue
- Show you how to build foundation model based systems
- Discuss responsible AI in the era of ChatGPT
Live audience Q&A follows an expert panel discussion featuring:
- Aurelie Jacquet, Chair of Australia’s AI standards committee, Consultant on AI, data ethics and responsible implementation of emerging technologies
- Dr Kobi Leins, Expert member of Australia’s AI standards committee and Honorary Senior Fellow, King’s College, London
- Louise McGrath, Head of Industry Development and Policy, Australian Industry Group
- Adam Stingemore, GM Engagement & Communications, Standards Australia
Standards Australia is excited to join the National AI Centre’s Responsible AI Network to deliver workshops and content that support Australian businesses in understanding and using Australian standards to adopt AI creatively and safely.
Throughout 2023, Standards Australia will host a series of workshops targeted at businesses, industry representatives and governments, that will explore a variety of topics including:
- Introduction to AI standards;
- AI concepts, terminology and lifecycle;
- Standards for responsible and inclusive AI;
- AI Management Systems standard; and
- Data-centric AI
Winner of the QLD iAward Government & Public Sector Solution of the Year 2022 category, the Navigation Stack will compete against other state winners at the national iAwards in October.
This talk was held as part of our 2022 AI, Cyber, Modelling & Simulation for SME Growth Symposium
This panel was held as part of our 2022 AI, Cyber, Modelling & Simulation for SME Growth Symposium
This talk was presented as part of our 2022 AI, Cyber, Modelling & Simulation for SME Growth Symposium
The use of data and analytics is increasingly part of our joined-up digital world. In this talk at our 2022 AI, Cyber, Modelling & Simulation for SME Growth Symposium, Ian explains how AI can be used in an appropriate fashion to embrace data complexity and how data science can be used to harness value. Ian touches on the newly launched NSW Government AI assurance framework, as well as some of the newly developed standards focusing on AI.
Digital Twins have become prominent aids for decision-making in many application domains.
This talk will examine the mutual relationship between Artificial Intelligence technologies and Digital Twins and highlight the work being undertaken at the Industrial A.I. Research Centre at the University of South Australia and its partners.
Industry processes are generating large volumes of data. These processes might be capturing the manufacturing of components or the movement of patients through a hospital.
Techniques from the field of Artificial Intelligence (AI) can be leveraged to understand the data and potentially enhance these processes or workflows: AI for Enhancing Workflow through Predictive Analytics.
This work cannot be performed in silos and requires expertise from many disciplines. This presentation will overview several projects where AI is being used to investigate data from medical domains and how the results will benefit end-users.
Immersive analytics—using immersive technologies to visualise and support analytical reasoning through interaction—is becoming a prominent area in human-computer interaction research.
This talk will present the visualisation and immersive analytics work being undertaken within the Australian Research Centre for Interactive and Virtual Environments, and will present opportunities where AI could contribute to this area.
The increased dependency on technology and information, coupled with a growing attack surface, makes healthcare one of the biggest targets for cyberattacks as well as information security mistakes.
In 2021, we face new challenges to old problems: how to protect healthcare, ensure patient safety and make cybersecurity an enabler of safe clinical care.
The wine industry is constantly facing new challenges, and AI and cyber offer unique solutions to help both wineries and consumers.
In this presentation, Hennekam showcases how new technologies are being used to tackle the following problems:
- Water shortages in the face of heatwaves, plus longer and more frequent droughts due to climate change;
- Proving the authenticity of wines to combat wine counterfeiting, which costs the industry billions; and
- Finding new markets in the digital world, and data analytics to guide business decisions and customer interactions.
According to Gartner, Australian organisations are expected to spend over $4.9 billion on enterprise information security and risk management products and services by the end of 2021.
AI- and machine learning-based cybersecurity solutions have been commoditised for various use cases, such as automating complex security tasks, tracking malicious activity, endpoint protection and malware detection, among others. However, SMEs need to be aware of ML technology challenges and data prerequisites for the successful deployment of these advanced technologies.
If successfully deployed, these technologies can augment an SME’s cybersecurity by proactively monitoring for suspicious activity and resolving problems before they cause impact, and by keeping track of individuals’ activities to further protect them from more sophisticated threats. Research estimates suggest that machine learning in cybersecurity will boost spending on big data, artificial intelligence (AI) and analytics to around $96 billion by 2021.
This brief talk will focus on five important applications of machine learning to cybersecurity with the primary objective of highlighting practical management challenges and best practice suggestions for SMEs.
The ability to monitor and assess public attitudes is an operational and strategic necessity for any public facing organization. Opinions for and against entities and institutions flow freely through social media. Left alone, popular opinions get shared, unpopular opinions get swept under the rug, and consensus, be it transient and unstable or “common sense” and long-lived, can form organically.
However, this dynamic is open to abuse from actors who wish to manipulate such public opinion via online influence operations. In this presentation I will talk about:
- The importance of understanding online narratives including how adversarial entities may attempt to manipulate them via influence operations
- The challenges inherent in using data from the Surface, Deep and Dark Web to monitor and analyse online conversations
- The application of advanced open-source intelligence (OSINT) technology for understanding foreign influence
The Industrial Age operated with two tiers, human environments and physical industrial environments, in which people directly controlled their industrial machines. The current Information Age operates with a three-tiered arrangement in which human environments interface to information environments, which in turn interface to physical environments.
As a consequence, people no longer directly control their physical environments. They instead issue commands to information environments, and those information environments directly control the physical environments. Indeed, society now has a total reliance on information environments, to the point that they directly influence and sometimes control the human, information and physical environments of others.
If you can control a society’s information environments, then you can control that society. This has ushered in a new era of Information Warfare, in which warfare is now conducted inside information environments to affect the human, information and physical environments of others.
As Information Warfare matures this will increasingly become a contest between Artificial Intelligence systems in which we effectively have digital soldiers in digital armies inside information environments. Artificial Intelligence needs to progress to meet this challenge.
Pipelines are key to providing drinking water. However, pipe monitoring and maintenance are often complicated because pipes are buried underground.
Fluid transient waves have been used for assessing and monitoring the condition of pipelines to detect the presence of anomalies (e.g. leaks, blockages) and the occurrence of abnormal events (e.g. bursts).
Nonetheless, existing techniques either require information about the properties of the pipe (model-based techniques) or involve long processing times to obtain results.
Artificial intelligence algorithms have proven to have significant potential in complementing existing techniques with the development of data-based pipeline inspection techniques.
This talk will present techniques that combine transient pressure waves and custom-designed Artificial Neural Networks (ANNs) for the active and passive inspection of water pipelines.
A second application, the use of artificial intelligence for the detection of cracks in pipelines using acoustic methods, is presented to highlight how AI can support the operation of essential services such as water supply.
Protecting your business and your customers’ information is critical to survival in the digital economy. Every organisation is at risk, and the cost to the economy is enormous. When it comes to protecting your business from cyber threats, “it is not if, it is when!” SMEs will often ask: where do I start?
There are many tools and resources available along with Government initiatives to help combat these threats. There is also a shortage of cyber security workers in Australia today. It is estimated that this shortage is approximately 16,000 workers.
The Australian Cyber Collaboration Centre (A3C) was established in 2020 to help build cyber awareness and resilience in Australian corporates, SMEs, Government and Defence. The A3C is a member-based not-for-profit organisation helping businesses launch new cyber products and services to global markets, providing access to cyber courses, solving real-world cyber challenges through collaboration, and building cyber awareness and resilience along the value chain.
Using an artificial immune system to detect malicious intrusions
Can people who know almost nothing about cyber attacks and malicious activity make any headway in this field?
Using a novel artificial neural network to model the joint probability distribution of data, as the self in an artificial immune system, has enabled a robust type of anomaly detection.
Anomalies can be detected individually or by comparing the joint distribution of live data to the reference distribution, leading to collective anomalies. We will examine its performance on a real web traffic data set.
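As a minimal illustrative sketch only (not the speakers' actual network or dataset), the "self" in this style of artificial immune system can be approximated with a simple density model fitted to normal reference data: live observations whose likelihood falls below a threshold learned from the reference set are flagged as anomalous.

```python
import numpy as np

def fit_self(reference):
    """Fit a multivariate Gaussian density to normal reference data (the 'self')."""
    mu = reference.mean(axis=0)
    # Small diagonal jitter keeps the covariance invertible.
    cov = np.cov(reference, rowvar=False) + 1e-6 * np.eye(reference.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    """Per-row Gaussian log-likelihood of observations x."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distance squared
    return -0.5 * (quad + logdet + mu.size * np.log(2 * np.pi))

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 2))          # stand-in for normal traffic features
mu, cov = fit_self(normal)

# Threshold: flag anything less likely than the bottom 1% of reference data.
threshold = np.quantile(log_likelihood(normal, mu, cov), 0.01)

live = np.vstack([rng.normal(0, 1, size=(50, 2)),  # normal live traffic
                  rng.normal(6, 1, size=(5, 2))])  # 5 injected anomalies
flags = log_likelihood(live, mu, cov) < threshold
```

A collective-anomaly check, as described above, would instead compare the joint distribution of a window of live data against the reference distribution (e.g. via a divergence measure) rather than scoring points one at a time.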
Co-Lead, Law and Policy Theme, Cyber Security Cooperative Research Centre
Predictive Technologies: New Challenges in a Future-Focused World
Businesses have been increasingly relying on AI-based applications to assist decision-making processes and risk assessments in various organisational contexts.
AI-based technologies provide sensitive, future-focused data to assist decision-makers in evaluating risk. While adding a large amount of time-sensitive and relevant data that informs decision-makers in real time, AI-generated data also place additional burdens on decision-makers. This includes triggering cognitive biases, including invisible blind spots, and creating a persuasive virtual representation of reality that is difficult to refute.
In this talk, A/Prof Krebs will discuss the challenges stemming from human-machine interaction in the context of technology-assisted decision-making processes, and suggest ways for improving technology-assisted decision-making processes.
Transforming business operations with AI and Digital Twins
Improved computer processing power, access to the internet and the proliferation of robust, low-cost sensors are paving the way for more and more businesses to take advantage of AI and create digital twins.
The technology is now becoming accessible to SMEs across a range of new sectors. Examples of such applications will be discussed.
Impact of developer and end user human issues on AI and cybersecurity
Humans are a key part of software development, including customers, designers, coders, testers and end users.
This talk will discuss several examples from recent work on handling human-centric issues when engineering software systems.
This includes personality impact on aspects of software development; understanding interpersonal issues in agile practices; incorporating end user emotions into software requirements engineering; providing proactive design critics in software tools to augment human decision making; modelling diverse human users of software systems; human-centric defect reporting; and the use of human-centric, domain-specific visual models for non-technical experts to specify and generate systems, without the need for software engineers at all.
The CSIRO SME Collaboration Nation initiative aims to double the number of SMEs that engage with publicly-funded R&D by 2025 by (i) amplifying existing facilitation programs and providing better connections between them; (ii) simplifying and removing barriers to collaboration; and (iii) helping businesses and researchers understand the value of collaboration for positive impact.
This presentation will take a tour through CSIRO’s current collaboration offerings (e.g. through SME Connect), showcase some examples of SME impact through collaboration, and introduce some of the recommendations from our research project with RMIT, which asked 800 SMEs about the real and perceived barriers they face to collaboration.