Miguel Nicolelis - The Future of Human Augmentation
Nokia Bell Labs, 2024-10-22

How do we make explanations of recommendations beneficial to different users?
Nokia Bell Labs, 2023-06-06 | Explainable AI is important but serves different purposes in different systems, ranging from accuracy to fairness. In recommenders suggesting movies to an individual, an explanation tells the individual how their preferences were accurately considered in coming up with the recommendation. In recommenders suggesting content to a group, not all the preferences in the group can be considered all the time, and an explanation tells the group members how the recommendation was formulated in a fair way.
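The difference between individual and group recommendations comes down to the aggregation strategy the recommender uses, which is also what a fairness-oriented explanation would describe. A minimal sketch (the two strategies are standard in the group-recommendation literature, but the function names and ratings below are made up for illustration):

```python
# Two common ways a group recommender can aggregate individual
# preference scores, assuming each member rated each item on a 1-5 scale.

def average_score(ratings):
    """'Average' strategy: maximize overall group satisfaction."""
    return sum(ratings) / len(ratings)

def least_misery_score(ratings):
    """'Least misery' strategy: no member should be very unhappy."""
    return min(ratings)

def recommend(group_ratings, strategy):
    # group_ratings: {item: [rating of member 1, member 2, ...]}
    return max(group_ratings, key=lambda item: strategy(group_ratings[item]))

group_ratings = {
    "comedy":   [5, 5, 1],   # one member strongly dislikes it
    "thriller": [4, 3, 3],   # nobody's favorite, nobody miserable
}

print(recommend(group_ratings, average_score))       # comedy
print(recommend(group_ratings, least_misery_score))  # thriller
```

An explanation of the "least misery" pick ("we avoided the movie one member strongly dislikes") communicates fairness to the group in a way an "average" explanation does not.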
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Investigating Algorithmic Biases in Child Welfare Systems Through Human-Centered Data Science
Nokia Bell Labs, 2023-06-06 | In his talk “Investigating Algorithmic Biases in Child Welfare Systems Through Human-Centered Data Science”, Shion Guha from the University of Toronto and his team explored how algorithms are used in child-welfare management systems in the US. Based on an 18-month ethnographic study at the Wisconsin Department of Children and Families, the researchers found that these algorithms often do not use the right data: psychometric and medical assessments are used, yet they are far less informative than case notes, medications, and family assessments, which come in plain text and are harder to process automatically. The algorithms also focus on minimizing legal risks rather than maximizing children’s welfare.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Value-based Engineering with IEEE 7000
Nokia Bell Labs, 2023-06-06 | Value-based Engineering (VBE) is a systematic approach for the development of ethical AI systems. It involves three key phases: exploring concepts and content, understanding values, and aligning design with those values. In the first phase, all stakeholders and partners are consulted to define the components and data flows of the AI system. In the second phase, corporate and social values are ranked and prioritized to better understand what is important. Finally, in the third phase, these values are translated into ethical design requirements that guide the development of the AI system. VBE has been shown to generate significantly more valuable and innovative product ideas in response to moral challenges, compared to traditional product roadmap approaches.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Mind the gap: From predictions to ML-informed decisions
Nokia Bell Labs, 2023-06-06 | People interact with machines for a variety of reasons, for example, to predict and assess loan risks. Typically, interactions between people and machines range between two extremes: humans tend to either under-rely on an algorithm by ignoring its recommendations (algorithmic aversion) or over-rely on it by blindly accepting any recommendation (automation bias). To characterize human-AI interactions within that range, Maria De-Arteaga from the University of Texas at Austin took the case of a child-welfare risk assessment tool deployed in Allegheny County in the US. This tool helps social workers decide which children “in the system” need more attention by associating a risk score with each child. The researchers found that the tool made the workers’ job more efficient (a higher number of screened-in cases) and, importantly, workers did not rely on the tool when it was likely to make the wrong call. In general, interactions with any AI system should strike the right balance between algorithmic aversion and automation bias; how to do so is extremely hard and the subject of ongoing research.
If you wish to know more, watch the talk below and read:
1. A case for humans-in-the-loop: Decisions in the presence of erroneous algorithmic scores — arxiv.org/abs/2002.08035
2. Leveraging expert consistency to improve algorithmic decision support — arxiv.org/pdf/2101.09648.pdf
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

The Ethics of Emotion in Artificial Intelligence Systems
Nokia Bell Labs, 2023-06-06 | AI systems that analyze human emotions are called Artificial Emotional Intelligence (AEI), and their ethical implications have attracted considerable attention. Most of today’s AI technologies, such as facial recognition systems, digital phenotyping, and sentiment analysis, are grounded in the Basic Emotion Theory. In his work, Luke challenged the use of this theory and argued that the lack of consensus on how to model emotions raises the question of whether it is ever ethically appropriate to develop and deploy AEI for public consumption.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Privacy and Synthetic Data: The Good, The Bad, and The Ugly
Nokia Bell Labs, 2023-06-06 | To make a dataset more privacy-preserving, a generative adversarial model trained with differential privacy can generate a synthetic version of the dataset. This synthetic dataset preserves the original dataset’s statistical properties while minimizing privacy risks. However, the preservation works less well for groups underrepresented in the data. If an AI were trained on the synthetic dataset, it would be disproportionately inaccurate for those groups. Future generative models should not only reproduce the original data in a privacy-preserving manner but also guarantee fairness across subgroups.
Reference links:
1. Robin Hood and Matthew Effects: Differential Privacy Has Disparate Impact on Synthetic Data — https://proceedings.mlr.press/v162/ganev22a/ganev22a.pdf
2. Exploiting Unintended Feature Leakage in Collaborative Learning — arxiv.org/pdf/1805.04049.pdf
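The disparate impact described above can be seen in miniature with the Laplace mechanism, the basic building block of differential privacy: noise of a fixed scale distorts small subgroup counts far more, in relative terms, than large ones. A sketch with hypothetical group sizes and privacy budget:

```python
# The Laplace mechanism adds noise of scale b = sensitivity / epsilon to
# each count, so the *relative* distortion of a subgroup of size n is
# roughly b / n — negligible for large groups, large for small ones.
# The group sizes and epsilon below are made up for illustration.

epsilon = 1.0
b = 1.0 / epsilon                      # a counting query has sensitivity 1
counts = {"majority": 10_000, "minority": 50}

expected_rel_error = {group: b / n for group, n in counts.items()}
for group, err in expected_rel_error.items():
    print(f"{group}: expected relative error ≈ {err:.2%}")
# majority ≈ 0.01%, minority ≈ 2.00% — 200x worse for the minority group
```

The same fixed privacy budget thus costs underrepresented groups far more statistical fidelity, which is exactly why models trained on the synthetic data become unfairly inaccurate for them.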
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Deep Neural Networks, Explanations, and Rationality
Nokia Bell Labs, 2023-06-06 | As AI increasingly becomes a part of our daily lives, its decisions can have far-reaching effects on humanity. Yet, the explanations for these decisions often leave us feeling puzzled and confused. Unlike the clear, logical reasoning that underlies human explanations, AI's justifications can seem opaque and difficult to understand. But what if we could train two AI systems to engage in a kind of duel, in which the outcome would be a human-like explanation for the AI's decisions? This is the promise of Generative Adversarial Networks, in which one system generates an explanation while the other determines whether that explanation was created by a machine or a human. The result is an explanation that is both intelligible to us and faithful to the workings of the AI.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Designing for Online Safety: Towards More Equitable and Trustworthy Systems
Nokia Bell Labs, 2023-06-06 | Online threats are actions that cause discomfort or put one’s online identity at risk (e.g., stolen login credentials). In her research, Daricia Wilkinson of Clemson University focused on understanding how people perceive, evaluate, and mitigate these threats. She did so by deploying a survey involving more than five hundred participants across nine Caribbean countries (these countries suffer from imbalances, inequalities, and injustices compared to Western ones, and are often overlooked by the scientific literature). It turns out that 90% of the participants experienced some form of online threat, including unwanted targeted advertising, unsolicited content, and compromised login information.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

6G network as a sensor proof of concept
Nokia Bell Labs, 2023-05-19 | 6G networks will be able to sense their surroundings, allowing us to generate highly realistic digital versions of the physical world. This digital awareness would turn the network into our sixth sense. At MWC23, Nokia Bell Labs showcased the very first 6G sensing capability using real prototype radio equipment.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

Nokia and Bosch set a new bar for 5G positioning and look ahead to 6G
Nokia Bell Labs, 2023-05-18 | Nokia and Bosch announced their first strategic collaboration in 2017 to develop industrial IoT and sensing solutions. The two companies have now begun conducting joint research into the next generation of networking, investigating how future 6G networks could be used for both communications and sensing once they become commercially available by the end of the decade.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

Nokia, DOCOMO and NTT make two key 6G advances
Nokia Bell Labs, 2023-05-16 | Nokia, DOCOMO and NTT first launched their 6G collaboration in June 2022 and achieved two key technological milestones seven months after that announcement. The first is the implementation of artificial intelligence (AI) and machine learning (ML) in the radio air interface, effectively giving 6G radios the ability to learn. The second is the utilization of new sub-terahertz (sub-THz) spectrum to dramatically boost network capacity. The collaboration continues a long history of pioneering between DOCOMO and Nokia. From 3G in the 1990s, through 4G, to today’s 5G, including collaboration on 5G O-RAN, the companies have turned ideas into implementations and pushed boundaries to create optimized experiences for end users.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

Nokia Remote Environmental Monitoring
Nokia Bell Labs, 2023-03-27 | Effective large-scale, low-maintenance remote environmental sensing and monitoring solutions for early disaster detection, warning, and prevention.
See more: bell-labs.com/remote-environmental-monitoring
To find out more, see the Nokia Bell Labs Automation hub: bell-labs.com/research-innovation/automation
To find out more, see the Nokia Bell Labs Semiconductors and Devices hub: bell-labs.com/research-innovation/semiconductors-devices

Sanja Scepanovic was drawn to science through nature
Nokia Bell Labs, 2023-03-17 | The landscape of her native Montenegro and how the natural beauty of her surroundings worked so perfectly together first drew Sanja Scepanovic to science. Her journey into the male-dominated field continued through Finland and several other stops before arriving in Cambridge, England, where she works for Nokia Bell Labs as a research scientist on the social dynamics team of the AI research lab. Listen as Michelle Fernandez, Head of Technology Content and Marketing, and Sanja dive deeper into Sanja’s love of nature and science and how that has shaped her STEM career.

An All-Silicon E-Band Backhaul-on-Glass Frequency Division Duplex Module
Nokia Bell Labs, 2023-03-16 | Watch Nokia Bell Labs researcher Shahriar Shahramian explain the work he has done on E-Band communication that was presented at the Radio Frequency Integrated Circuit (RFIC) Symposium in 2022. He says it is part of the extraordinary communications infrastructure that helped the world weather a global pandemic. He and his team won third place in the RFIC Paper Awards for their presentation of an All-Silicon Backhaul-on-Glass Frequency Division Duplex Module.
See presentation: ieeexplore.ieee.org/document/9863150
To find out more, see the Nokia Bell Labs Semiconductors and Devices hub: bell-labs.com/research-innovation/semiconductors-devices

A D-Band Radio-on-Glass Module for Spectrally-Efficient and Low-Cost Wireless Backhaul
Nokia Bell Labs, 2023-03-16 | Watch Nokia Bell Labs researcher Shahriar Shahramian explain the work he has done on D-Band communication that was presented at the Radio Frequency Integrated Circuit (RFIC) Symposium in 2020. He and his team won first place in the RFIC Paper Awards for their presentation of a Radio-on-Glass Module for Spectrally-Efficient and Low-Cost Wireless Backhaul.
See presentation: ieeexplore.ieee.org/document/9218437
To find out more, see the Nokia Bell Labs Semiconductors and Devices hub: bell-labs.com/research-innovation/semiconductors-devices

Adriana Vasilache always wanted to figure out how things worked
Nokia Bell Labs, 2023-03-14 | Growing up in Romania, Adriana Vasilache had to overcome numerous obstacles along the way. But thanks to family support and her natural curiosity, she pressed ahead to become an expert in biomedical and image data compression. Renowned for her work in speech and audio coding, she says the secret to being heard as a woman is to remain true to one’s authentic self. Listen as Michelle Fernandez, Head of Technology Content and Marketing, and Adriana Vasilache, Principal Researcher at Nokia Technologies, discuss Adriana’s inspiration and sage advice for women about pursuing a career in STEM.

Anne Lee has been passionate about science since she was a little girl
Nokia Bell Labs, 2023-03-08 | It began with Saturday morning cartoons about science and continued with a love of learning math in school. But the 1969 mission to the Moon was what really launched Anne Lee’s lifelong passion for the world of science and technology. With her father’s steady support, she has gone on to become a pioneering researcher at Nokia. Listen as Michelle Fernandez, Head of Technology Content and Marketing, and Anne Lee, Senior Technology Advisor, discuss what sparked Anne’s interest in science.

Nokia Bell Labs and Keysight set world record in optical communications
Nokia Bell Labs, 2023-03-06 | Recently, Nokia Bell Labs and Keysight Technologies partnered to test a 260 GBaud ultra-high-speed optical signal transmission over 100 km of standard single-mode fiber (SSMF) at the European Conference on Optical Communication (ECOC) 2022 in Basel, Switzerland, exceeding the previous record of 220 GBaud. The result: a new world record in coherent optical communications.
In this video, you will go behind the scenes and learn more about this incredible collaboration.
To find out more, see the Nokia Bell Labs Network Fundamentals hub: bell-labs.com/research-innovation/network-fundamentals

Distributed massive MIMO: Solving the uplink challenge
Nokia Bell Labs, 2023-02-26 | The way we use our mobile devices is dramatically changing as we livestream content, create high bit-rate videos and regularly participate in video conferencing. Additionally, as the Internet of Things becomes more pervasive and data is exchanged in real time, billions of devices are uploading data, not just downloading it.
Expanding uplink capacity without sacrificing downlink performance is becoming a big challenge for Communication Service Providers. To meet that challenge, AT&T and Nokia Bell Labs are collaborating in the lab on a prototype for an emerging technology called distributed massive MIMO. Hear from Dave Wolter, Assistant Vice President for Radio Technology at AT&T Labs, and Peter Vetter, President of Nokia Bell Labs Core Research, as they discuss how distributed massive MIMO works and how it would greatly boost uplink capacity in the 5G-Advanced era.
To find out more, see the Nokia Bell Labs Network Fundamentals hub: bell-labs.com/research-innovation/network-fundamentals

Camera as a Service
Nokia Bell Labs, 2023-02-26 | Nokia Bell Labs’ multifunctional, secure and privacy-preserving camera platform with embedded machine learning and automated MLOps to enable various industrial and consumer-facing applications.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

Augmenting human potential in the 6G era
Nokia Bell Labs, 2023-02-26 | The 6G era will be defined by the symbiosis of the digital, physical, and biological worlds, with the goal of augmenting human productivity and wellbeing. While in the 5G era, thanks to the massive-scale deployment of sensors, the digital world faithfully captures past and current states of the physical world, the connection of these two worlds with the biological or cognitive world remains largely unaddressed. We believe that in the 6G era cognitive systems will anticipate individual and collective intents to plan for actions in those worlds that optimally serve human needs. For that to happen, we will need significant advances in artificial intelligence, computing and sensing technologies. The 6G network will be the essential infrastructure for the integration of these future capabilities.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

Moving toward integrated sensing and communications in 6G
Nokia Bell Labs, 2023-02-26 | Watch Dr. Thorsten Wild, Head of Next Generation Wireless research at Nokia Bell Labs, explain how the network will become the sensor in the 6G communications era.
6G will give our networks the ability to sense. By bouncing signals off objects, the network will determine what’s there, how things are moving – and potentially even what they’re made of. The network becomes our sixth sense, extending our awareness beyond our immediate surroundings.
This sensing capability can be used to map a digital version of the physical world. By interacting with this ‘digital twin’, we could extend our senses to every point the network touches.
To find out more, see the Nokia Bell Labs Automation hub: bell-labs.com/research-innovation/automation

The Future of manufacturing
Nokia Bell Labs, 2023-02-26 | Get a glimpse inside the factory of the future, which uses private wireless networks throughout: 5G and edge cloud connectivity enabling robots to complete mission-critical tasks, LTE for 3D printing and automated guided vehicles, and WiFi for extracting telemetry data from machinery.
To find out more, see the Nokia Bell Labs Automation hub: bell-labs.com/research-innovation/automation

Radio on Glass
Nokia Bell Labs, 2023-02-26 | Hear Nokia Bell Labs researcher Shahriar Shahramian explain how radio frequency integrated circuit (RFIC) technology on a glass substrate will enable extreme data throughput for sub-terahertz communication.
To find out more, see the Nokia Bell Labs Semiconductors and Devices hub: bell-labs.com/research-innovation/semiconductors-devices

6G AI native air interface proof of concept
Nokia Bell Labs, 2023-02-26 | By pairing an AI-based learned waveform in a transmitter with a deep-learning receiver, Nokia Bell Labs was able to design and implement a learning air interface that transmits data efficiently under many different scenarios. It effectively gives radios the ability to learn, and we believe a dynamic AI/ML-defined native air interface will be a key component of future 6G networking.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

6G sub-THz proof of concept
Nokia Bell Labs, 2023-02-26 | The sub-terahertz (sub-THz) bands have never been designated for cellular use because of their propagation characteristics, but new techniques such as beamforming could open up those frequencies to future 6G networks. In this proof-of-concept, Nokia Bell Labs was able to utilize sub-THz spectrum to dramatically boost network capacity.
To find out more, see the Nokia Bell Labs 6G hub: bell-labs.com/research-innovation/network-fundamentals/what-is-6g

Autonomous Industrial Monitoring Service
Nokia Bell Labs, 2023-02-26 | Pairing a fleet of small, autonomous drones with a digital twin of your facility, Autonomous Industrial Monitoring Service (AIMS) builds on the foundation of private wireless to provide warehouse and vertical farm operators with continual updates on the status of their inventory.
To find out more, see the Nokia Bell Labs Automation hub: bell-labs.com/research-innovation/automation

Nokia is networking the Moon
Nokia Bell Labs, 2023-02-26 | Like shelter, food and life support, communications will be a crucial component of any future lunar mission. Not only will astronauts need to communicate with one another and mission control, they will also need mobile voice and data capabilities for a number of applications: high-definition video streaming, remote monitoring of sensors and instruments, exchange of telemetry and biometric data and critical command and control functions such as the remote control of lunar rovers. All these applications require advanced communication capabilities.
Nokia is working with NASA and mission partners Intuitive Machines and Lunar Outpost to deploy an LTE/4G network on the Moon in order to demonstrate how advanced cellular technologies can be used for critical communications in future planetary exploration. The uncrewed, robotic mission will help pave the way to a sustainable human presence on the lunar surface.
To find out more, see the Nokia Bell Labs Lunar Network hub: bell-labs.com/research-innovation/network-fundamentals/first-cellular-network-on-the-moon

Human-Centered Approaches to Supporting Fairness in AI
Nokia Bell Labs, 2023-02-26 | Vivek Krishnamurthy of the University of Ottawa gave a talk titled “Human-Centered Approaches to Supporting Fairness in AI”. The main message was that well-meaning AI applications might well have a detrimental impact on human rights. To see why, consider the use of algorithms in a criminal justice tool that assesses the likelihood of a defendant reoffending. In the US, such a system is called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). It was initially designed to make sentencing faster and fairer. Yet it has repeatedly been shown to be racially biased despite not using any racial information in its predictions. That is because race correlates with a feature that COMPAS does use: the defendant’s zip code. When recommending years of incarceration based upon the likelihood of re-offending, COMPAS thus effectively discriminates based on race. The kind of problem COMPAS illustrates is more common than one would think, especially in consumer AI products, because these products are created through deployment cycles so fast that ethical or fairness considerations become an afterthought. This needs to change. Businesses can ensure their products do not harm human rights by, for example, using auditing tools based on current policies, legislation and regulations.
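The proxy-feature effect is easy to reproduce on synthetic data: a model that never sees race can still produce racially skewed predictions when it relies on a correlated feature such as zip code. This sketch is illustrative only (made-up data, not COMPAS):

```python
import random

# A "race-blind" risk model that only looks at zip code can still
# discriminate by race when residential segregation makes race and
# zip code strongly correlated. All numbers here are hypothetical.

random.seed(42)

def make_person():
    race = random.choice(["A", "B"])
    # Segregation: group B lives in the "high-risk" zip with prob 0.9,
    # group A with prob 0.1.
    p_high = 0.9 if race == "B" else 0.1
    zip_code = "high" if random.random() < p_high else "low"
    return race, zip_code

def predicted_high_risk(zip_code):
    # The model's learned rule: flag anyone from the "high-risk" zip.
    # Race is never an input.
    return zip_code == "high"

people = [make_person() for _ in range(10_000)]
for race in ["A", "B"]:
    zips = [z for r, z in people if r == race]
    rate = sum(predicted_high_risk(z) for z in zips) / len(zips)
    print(f"race {race}: flagged high-risk {rate:.0%} of the time")
# Group B is flagged far more often, despite race never being used.
```

Auditing tools catch exactly this: they compare outcome rates across protected groups rather than inspecting which features the model nominally uses.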
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Nokia Bell Labs, 2023-02-26 | Vera Liao from Microsoft Research Montréal showed that empirical studies have found no conclusive evidence on whether AI systems that offer interpretable explanations to their users are more effective and trustworthy than corresponding black-box models. In theory, there are two types of users. In Kahneman’s terms, users who carefully read explanations engage in slow thinking (System 2), while those who quickly process explanations engage in fast thinking (System 1). In practice, humans that interact with AI systems are a single type of user: fast thinkers. The problem is that current explainability work assumes the user is the ideal user (System 2) rather than the typical user (System 1). The challenge for future research is to generate explanations that lead to more “System 2 thinking.”
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Responsible AI – getting the human back into the loop
Nokia Bell Labs, 2023-02-26 | Simone Stumpf of the University of Glasgow focused on the different research works from which she developed the four principles that govern explanations in AI systems. Explanations need to be: sound (faithful to the underlying machine-learning algorithm), complete (explain the training data upon which the predictions are made), iterative (incrementally reveal themselves) and not overwhelming for the users. In different experiments, Simone and her team showed that having users interact with and offer feedback to an AI system increases the accuracy of the system itself. They also found the best feedback was offered by the users who had a good understanding of how the system worked. In other words, these users had the so-called mental model of the system, which was assessed by asking them questions about how the system was understood to work. Interestingly, since her work has focused on interactive AI systems, Simone opted for the word “interpretability” rather than “explainability.” That is because “explainability” is system-centric (the system provides explanations about itself), while “interpretability” is user-centric (the end user interacts with the system and helps develop explanations). Simone’s final message was that, as AI systems are complex socio-technical systems, we as AI researchers and developers need to involve the end users throughout the whole AI lifecycle, which will be her next big research challenge.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

The Future of AI for Social Good
Nokia Bell Labs, 2023-02-26 | Saiph Savage of Northeastern University showed how crowd-sourcing platforms could be made fairer. These platforms have been extensively used to label training data (e.g., images or text). The problem is that crowd-workers earn, on average, below the minimum wage (less than $2 per hour), contributing to a new societal class of workers – “the new poor.” Inspired by value-sensitive design and social justice theory, Saiph developed fairer crowd-sourcing platforms by building browser plug-ins that workers could use when completing their tasks. These plug-ins act as an AI-based coach that gives workers advice. The AI coach is equipped with a reinforcement learning model that can identify which strategies worked best in the past (i.e., which strategies helped workers achieve their goals and meet their needs). The researchers found that, compared to human-based coaching, AI-based coaching resulted in faster and more accurate workers who ended up increasing their wages. In the future, the researchers plan to build plug-ins that help workers develop their skills and, ultimately, their creativity.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Why we need Responsible AI
Nokia Bell Labs, 2023-02-26 | There is no question that AI has delivered numerous benefits in our everyday lives, but AI also carries risks. In order for AI to be embraced by our society, we need to make AI ethical. AI systems must be fair, reliable and accountable. They must cause no direct harm. They must be environmentally and socially sustainable. And they must protect our privacy. This is what Nokia Bell Labs calls Responsible AI.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Ethics in AI: A Challenging Task
Nokia Bell Labs, 2023-02-26 | Ricardo Baeza-Yates from the Institute for Experiential AI at Northeastern University gave a talk titled “Ethics in AI: A Challenging Task”. Machine-learning algorithms are vulnerable to data biases that are often amplified and lead to unfair decision-making. For example, a system used by the New York State Department of Justice to determine bail amounts is negatively biased toward people of color. Yet, crucially, that bias has little to do with the algorithm itself, but rather with how it is trained. The algorithm is trained upon judges’ decisions, which are racially biased. What's better? A biased algorithm? Or a noisy judge? A biased algorithm will produce the same outcome for two very similar cases, whereas a noisy judge (e.g., research has shown that judges tend to be harsher before lunch) might well decide on two different sentences for essentially the same case. Unlike the algorithm, the judge is not deterministic.
Furthermore, beyond the problem of data bias, even in the presence of a perfect training dataset, a machine-learning expert should still consider that the algorithm has been:
• Trained with data that does not capture the entire context of the problem.
• Optimized for accuracy, which might not always be what matters most. Arguably, it is more important to measure the impact of misclassifications, however rare they may be. In the medical domain, for instance, misclassifications have such significant consequences that measuring their impact matters far more than accuracy per se.
• Designed to produce deterministic, verifiable outputs. Yet, in circumstances of low confidence, rather than a clear-cut answer, it could be less harmful to output “I do not know.”
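The last point above is often implemented as a classifier with a reject option: below a confidence threshold, the system abstains instead of answering. A minimal sketch (the threshold, labels, and probabilities are hypothetical):

```python
# A classifier with a reject option: it answers only when its most
# confident prediction clears a threshold, and otherwise abstains —
# the machine equivalent of saying "I do not know."

def classify_with_abstention(probabilities, threshold=0.8):
    """probabilities: {label: p}. Returns a label, or None to abstain."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else None

print(classify_with_abstention({"approve": 0.95, "deny": 0.05}))  # approve
print(classify_with_abstention({"approve": 0.55, "deny": 0.45}))  # None -> defer to a human
```

Abstained cases can then be routed to a human reviewer, trading coverage for safety in high-stakes domains.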
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

Trustworthy AI
Nokia Bell Labs, 2023-02-26 | In his talk “Trustworthy AI”, Mike Hind illustrated how IBM is making AI systems trustworthy. Trust is very difficult to build and very easy to destroy, as Thomas J. Watson, the founder of IBM, once said. That is why trust in AI has become a top priority for IBM. Trust helps maintain brand reputation, comply with regulation and safeguard against litigation. Practically, IBM developed and made publicly available a suite of Trustworthy AI toolkits: AI Fairness 360 measures data biases, Adversarial Robustness 360 allows researchers and developers to evaluate and defend machine-learning models against adversarial threats, and AI Explainability 360 adds an extra layer of explainability to machine-learning models. While these tools make specific parts of the AI lifecycle trustworthy, the whole lifecycle needs to be made trustworthy. To this end, IBM came up with the idea of “FactSheets”. These are automatically generated facts collected during the entire lifecycle of an AI system (e.g., measured characteristics of the dataset or the model, actions taken during the creation and deployment of the model).
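To illustrate the kind of data-bias measure a toolkit like AI Fairness 360 computes, here is a dependency-free sketch of the disparate impact ratio (the 0.8 rule of thumb is a common convention; the data and group names are made up, and this is not IBM's implementation):

```python
# Disparate impact ratio: the rate of favorable outcomes for the
# unprivileged group divided by that of the privileged group.
# A common rule of thumb flags ratios below 0.8 for review.

def favorable_rate(records, group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["favorable"] for r in rows) / len(rows)

def disparate_impact(records, unprivileged, privileged):
    return favorable_rate(records, unprivileged) / favorable_rate(records, privileged)

# Made-up dataset: 70% favorable outcomes for the privileged group,
# 35% for the unprivileged group.
records = (
    [{"group": "priv", "favorable": 1}] * 70 + [{"group": "priv", "favorable": 0}] * 30 +
    [{"group": "unpriv", "favorable": 1}] * 35 + [{"group": "unpriv", "favorable": 0}] * 65
)

ratio = disparate_impact(records, "unpriv", "priv")
print(f"disparate impact: {ratio:.2f}")  # 0.50 -> below 0.8, flag for review
```

A FactSheet would record a metric like this at dataset-creation time, alongside later facts from training and deployment.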
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-ai

How to Unknow the Uncertainties in Data Science
Nokia Bell Labs, 2023-02-26 | Michael Muller of IBM Research and Angelika Strohmayer of the Northumbria School of Design gave a talk titled “How to Unknow the Uncertainties in Data Science”. The talk clearly showed that in data science each data refinement step – from data collection to data cleaning to feature engineering – modifies the data. This process of “forgetting” information can be good or bad. It is good when it allows for engineering features that are simple and easy to explain. It is bad, however, when it ends up removing information that could prove to be crucial at a later stage. A case in point is an access control system that predicts adherence to COVID-19 regulations in an office (e.g., social distancing, face covering). The system is designed to use only employees’ movements and proximity, and not their faces, in order to protect their privacy. However, once deployed, it becomes increasingly clear that faces are a crucial part of predicting adherence to COVID-19 regulations and, as such, ignoring them significantly compromises the system’s accuracy. Therefore, it would be generally useful to keep track of two types of memories (through a sort of “data versioning”): data memory, to keep track of which data pieces are available at each data refinement step, and social memory, to keep track of why certain pieces are ignored or modified at a given step. Both types of memory could be part of future programming languages to make it possible for programmers to go back in time, if need be.
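The two memories could be prototyped today as a provenance log that records, at each refinement step, both the data snapshot (data memory) and the reason for the change (social memory). A sketch with a hypothetical API and made-up fields:

```python
import copy

# A minimal provenance log for data refinement: each step keeps a
# snapshot of the data ("data memory") and the rationale for the
# change ("social memory"), so a programmer can go back in time.

class RefinementLog:
    def __init__(self, data):
        self.steps = [{"name": "raw", "data": copy.deepcopy(data),
                       "why": "initial collection"}]

    def refine(self, name, fn, why):
        new_data = fn(copy.deepcopy(self.steps[-1]["data"]))
        self.steps.append({"name": name, "data": new_data, "why": why})
        return new_data

    def rewind(self, name):
        """Recover the data as it stood after a given step."""
        return next(s["data"] for s in self.steps if s["name"] == name)

# Echoing the COVID-19 example: faces are dropped by design...
log = RefinementLog([{"movement": 1.2, "face_id": "f1"},
                     {"movement": 0.4, "face_id": "f2"}])
log.refine("drop_faces",
           lambda rows: [{"movement": r["movement"]} for r in rows],
           why="privacy: faces excluded by design")  # social memory

# ...but if requirements change post-deployment, the raw step is recoverable
# and the log records why the information was removed in the first place.
print(log.rewind("raw"))
print(log.steps[-1]["why"])
```

The deep copies make each snapshot immutable history rather than a live reference, which is what allows the "rewind" to be trustworthy.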
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiHuman-Centered Approaches to Supporting Fairness in AINokia Bell Labs2023-02-26 | Michael Madaio of Microsoft Research, NYC, gave a talk titled “Human-Centered Approaches to Supporting Fairness in AI”. AI-industry practitioners are increasingly asked to adhere to principles for fair and responsible AI. The problem is that current responsible AI efforts focus on the training and testing of AI algorithms, leaving out the rest of the AI design lifecycle and ignoring what happens during design, prototyping and post-deployment. That is why Microsoft developed a comprehensive set of checklists, one for each of their AI design phases: envision, definition, prototype, build, launch, and evolve. Across different deployments within the company, they found that fairness is deeply contextual (e.g., the same video analytics system might have very different fairness requirements in the US than in Europe). As such, one-size-fits-all checklists do not work. The next logical step is then to figure out how to tailor checklists to the context at hand. Even as checklists become more customized to context, there is still another challenge: how to embed responsible AI in formal corporate processes. As Debra Meyerson puts it, that could be achieved by “tempered radicals” - employees who “slowly but surely create corporate change by pushing organizations through persistent small steps.”
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiDesigning Artificial Intelligence to Navigate Societal DisagreementNokia Bell Labs2023-02-26 | In his talk “Designing Artificial Intelligence to Navigate Societal Disagreement”, Michael Bernstein of Stanford University questioned the current use of ground truth labels in most (if not all) AI algorithms. In any ML classification task (e.g., online comment toxicity, misinformation), an algorithm requires a training set with ground truth labels. These are often provided by experts. Even experts, however, disagree with each other on these labels (e.g., Reddit moderators disagree with each other on 29% of posts labeled as toxic). The current practice of resolving these disagreements is via majority vote. However, this aggregation overrides labels from minorities. To partly fix that, Bernstein and his team introduced a new approach called “jury learning”. This is a supervised ML approach that “prescribes” which groups will impact a classifier’s prediction, so that under-represented groups still count.Maintaining fairness under distribution shiftNokia Bell Labs2023-02-26 | Jessica Schrouff from Google Research illustrated the problem of distribution shift for two healthcare applications: one for dermatology and the other for electronic health records. The problem of distribution shift occurs when a machine-learning model behaves differently from source (at the training stage) to target (at the deployment stage), making the model inconsistent and unreliable. In the case of dermatology, Jessica’s team found that during the deployment stage the model produced more misclassifications of skin conditions among elderly patients than among younger ones.
More broadly, the researchers found four types of distribution shift: demographic shift (e.g., sex), covariate shift (e.g., a new device captures the same image using another type of camera), label shift (e.g., prevalence of disease) and compound shift (any combination of demographic or label shift). Her team proposed causal graphs for modeling the dependency among groups of variables (e.g., demographics, features, labels) in the two environments, the source and the target, which partly quantify the presence of distribution shifts.
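Demographic shift, the first type above, can already be spotted with a simple comparison of the group distributions seen at training time versus deployment time. This is a hedged sketch of that check (not the authors' causal-graph method) using total variation distance; the data is synthetic and echoes the dermatology example:

```python
from collections import Counter

def total_variation(source, target):
    """Total variation distance between the empirical distributions of a
    categorical variable in two samples (0 = identical, 1 = disjoint)."""
    s, t = Counter(source), Counter(target)
    ns, nt = len(source), len(target)
    support = set(s) | set(t)
    return 0.5 * sum(abs(s[v] / ns - t[v] / nt) for v in support)

# Age groups seen at training (source) vs. deployment (target):
source_ages = ["young"] * 80 + ["elderly"] * 20
target_ages = ["young"] * 40 + ["elderly"] * 60

# A large distance flags a demographic shift: elderly patients are far
# more common at deployment than they were at training.
shift = total_variation(source_ages, target_ages)
```

A distance near zero suggests the deployment population resembles the training one; here the shift is substantial, which is exactly the situation in which group-specific error rates deserve a closer look.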
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiIs Legal AI Ethical AI?Nokia Bell Labs2023-02-26 | In their talk, Jessica Fjeld, Adam Nagy and Mason Kortz of Harvard University posed the question “Is legal AI ethical AI?” They clearly showed that this question is difficult to answer because it is hard to convey what is ethical and moral in meaningful legal language. AI algorithms will soon be required to meet key ethical principles such as non-discrimination and fairness. Typically, these principles need to be written in unambiguous legal language which can then be used to create accountability. These principles would then need to be incorporated into algorithms. As a result, there are four layers of abstraction: moral (one’s personal judgements of what is good and bad), legal (what the community agrees to be right or wrong), heuristics (the assumptions or information an algorithm is based upon), and algorithms (how the heuristics are implemented). This process of abstraction comes with a huge challenge: algorithms end up becoming a proxy of a proxy of a proxy of our beloved moral principle! There are many ways this can go wrong. For instance, consider the idea of fair housing in the U.S. As per the fair housing law, when awarding mortgages one cannot discriminate based on race, sex, nationality or any other protected attributes. This is why, to reach the so-called “fairness through unawareness,” mortgage lenders may choose not to use protected attributes such as race. Yet, in so doing, they may engage in a different, more subtle, form of discrimination. For example, not having any information about race makes it impossible to know whether a dataset is well-balanced in terms of race or not, and certain variables the algorithm does use to deny a mortgage may correlate with race (e.g., income).
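The proxy problem behind “fairness through unawareness” can be made concrete in a few lines: even when the protected attribute is dropped from the model's inputs, a feature the model does use may correlate strongly with it. A toy sketch with entirely synthetic numbers (not real mortgage data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic applicants: race is dropped from the model's inputs
# ("fairness through unawareness"), but income remains.
race   = [0, 0, 0, 0, 1, 1, 1, 1]          # protected attribute (hidden from the model)
income = [30, 35, 32, 28, 60, 65, 58, 70]  # feature the model *does* use

# Income strongly correlates with the dropped attribute, so decisions
# based on income can still discriminate by race.
r = pearson(race, income)
```

In this toy data the correlation is close to 1, so a model denying mortgages on income alone would reproduce much of the racial pattern it was supposedly blind to.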
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiA critical look at computer vision algorithms and data practicesNokia Bell Labs2023-02-26 | Jahna Otterbacher of the Open University of Cyprus gave a talk titled “It’s about time…and perspective: A critical look at proprietary computer vision algorithms and the data practices behind them”. Face recognition systems (e.g., in-store CCTVs for crime prevention) require a huge amount of data to be trained. To obtain such data, developers typically resort to crowd-sourced annotations where a crowd-worker is presented with an image (e.g., a face) and provides labels describing the image (e.g., gender, skin color). In her talk, Jahna highlighted two main sources of bias for crowd-workers. First, these workers describe not only the image’s content but also what is “worth saying” about it or inferring from it (e.g., inferences of one’s nationality from their skin color). Second, the workers’ labeling is affected by large exogenous events. For example, during the COVID-19 pandemic, crowd-workers used health labels far more than they did during other periods; and when the Black Lives Matter movement gained momentum, crowd-workers used more abstract labels for describing the physical appearance of Asian and Black individuals.
Fortunately, there are aspects one could control during the labeling process. More specifically, to mitigate these biases, one needs to consider that:
1. Open-ended responses are more subject to stereotyping than close-ended ones.
2. The set of crowd-workers and the set of people depicted in the images should ideally be demographically alike.
3. Longer tasks lead to more abstract and inferential responses.
4. Exogenous events might affect the labeling at hand.
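The second consideration above can be operationalized as a simple demographic-alignment check between the annotator pool and the people depicted in the images. An illustrative sketch with synthetic data; the function name and group labels are invented for this example:

```python
from collections import Counter

def demographic_gap(annotators, depicted):
    """Largest per-group gap between the share of a group among the
    annotators and its share among the people depicted in the images."""
    a, d = Counter(annotators), Counter(depicted)
    groups = set(a) | set(d)
    return max(abs(a[g] / len(annotators) - d[g] / len(depicted))
               for g in groups)

# Synthetic pools: the annotators are overwhelmingly from one group,
# while the depicted people are far more diverse.
annotators = ["asian"] * 5 + ["black"] * 5 + ["white"] * 90
depicted   = ["asian"] * 30 + ["black"] * 30 + ["white"] * 40

gap = demographic_gap(annotators, depicted)
```

A large gap (here 0.5) signals that the two pools are demographically misaligned, which the talk identifies as a risk factor for stereotyped labels; a recruiter could use such a check to rebalance the annotator pool before labeling starts.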
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiBridging AI and HCI: Incorporating Human Values into the Development of AI TechnologiesNokia Bell Labs2023-02-26 | Haiyi Zhu from Carnegie Mellon University presented two AI-supported tools that incorporate community values, one for a Wikipedia content-moderation system, and the other for a child-protection system.
The first tool had the goal of reducing the effort involved in maintaining Wikipedia communities. To that end, the tool needed to foster engagement among diverse editor groups. However, there is a trade-off between content moderation and engagement. Content moderation ideally requires a low false-negative rate (to catch all possible damaging contributions). By contrast, engagement requires a low false-positive rate (to avoid falsely flagging good edits and thereby discouraging potentially good contributors). To strike the right balance between false positives and false negatives, the tool allowed AI developers to fine-tune its model’s parameters interactively.
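The trade-off above can be seen by sweeping the decision threshold of a damage classifier: a stricter threshold flags fewer good edits (lower false-positive rate) but misses more damaging ones (higher false-negative rate), and vice versa. A toy sketch with synthetic scores, not the actual Wikipedia tool:

```python
def fpr_fnr(scores, labels, threshold):
    """False-positive and false-negative rates when an edit is flagged
    as damaging whenever its score reaches `threshold`."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

# Model scores for eight edits; label 1 = damaging, 0 = good-faith.
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   1,   0,   1,   1,   1]

strict  = fpr_fnr(scores, labels, 0.5)  # favors engagement: fewer good edits flagged
lenient = fpr_fnr(scores, labels, 0.3)  # favors moderation: fewer damaging edits missed
```

Lowering the threshold from 0.5 to 0.3 drives the false-negative rate down at the cost of a higher false-positive rate; interactive tuning lets developers choose where on this curve the community wants to sit.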
The second tool had the goal of assisting U.S. social workers in assessing the risk of child maltreatment. Based on semi-structured interviews, the researchers surprisingly found that these workers were more likely to override the tool’s predictions when these predictions did not align with their own personal judgments.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiWho are we listening to? Building blocks for trustworthy AINokia Bell Labs2023-02-26 | Hillary Juma of the Mozilla Foundation presented the Common Voice project. This project aims at tackling a key barrier to the adoption of voice technologies: not all voices, languages, accents or dialects are equally understood by current technologies (e.g., Amazon Alexa, Apple Siri), making them less accessible to a considerable portion of the population. To partly fix this, the project collected the largest publicly available audio dataset (13,000 hours of voice recordings) in 76 languages. The dataset was put together by more than 200 developers and 393,000 volunteers, through a community in which anyone in the world can contribute. Underserved populations could now have better access to modern audio technologies. As just one example, Common Voice has allowed Rwandans to have a chatbot tool in their own language to spread information about the COVID-19 pandemic in a timely fashion.
To find out more, see the Nokia Bell Labs Responsible AI hub: bell-labs.com/research-innovation/ai-software-systems/responsible-aiHonoring the 2022 Bell Labs FellowsNokia Bell Labs2022-12-09 | On Tuesday, November 29, 2022, Nokia inducted five new members into the Bell Labs Fellows, the company’s highest technical honor, reserved for individuals who have made outstanding and sustained contributions to Nokia and the communications industry in the areas of research and development.Camera as a ServiceNokia Bell Labs2022-11-15 | Nokia Bell Labs’ multifunctional, secure and privacy-preserving camera platform with embedded machine learning and automated MLOps to enable various industrial and consumer-facing applications.
See more: bell-labs.com/camera-as-a-serviceShaping the Future of (Industrial) Collaborative Business CreationNokia Bell Labs2022-07-11 | Thierry Klein, President Bell Labs Solutions Research, gave a presentation "Shaping the Future of (Industrial) Collaborative Business Creation" during the BME Innovation Day on June 16, 2022 in Budapest, Hungary.
https://innovacio.bme.hu/innovacios-nap/
To find out more, see the Nokia Bell Labs Automation hub: bell-labs.com/research-innovation/automationThe journey to 6G, from early crystal ball gazing to crystalizing research conceptsNokia Bell Labs2022-05-25 | Peter Vetter, president of Bell Labs Core Research at Nokia, talks about the journey to 6G at the 7th IEEE 5G++ Summit Dresden.