JAN 2023
FARI Conference 2022: Summary and Recommendations
On 5-6 July 2022, the Brussels FARI AI Institute for the Common Good (ULB-VUB) organised a conference on AI, Data and Robotics in Cities. Public officials, researchers, international city representatives and citizens were invited to this event. An Observer Committee of Brussels public servants and researchers attended all sessions to identify possible priorities for the Brussels Capital Region (BCR). This document summarises the results and recommendations of the group of observers.
Authors: Carmen Mazijn (VUB), Julien Gossé (ULB), Geoffroy Alsteens (CIRB), Sarah El Sialiti (Innoviris), Cedric Verstraete (Innoviris)
Editor: Carl Mörch (FARI)
At the beginning of summer 2022, FARI – AI Institute for the Common Good (ULB-VUB), organised its first conference. Over the course of two days, policymakers, researchers and the general public came together to discuss what AI for the Common Good means and how it can become reality in the Brussels Capital Region. This document contains a short summary for each of the six sessions, which took place at Bozar.
Artificial Intelligence (AI) innovates at a rapid pace and is a prime driver of the economy. Yet AI has a dual nature, with both positive and negative aspects. On the positive side, it holds great promise for addressing some of the most urgent global challenges, e.g. optimising food production, minimising waste, predicting disease and supporting learning.
On the negative side, AI has led to ethnic profiling in the U.S. and in China, and to unjustified suspicion of fraud in childcare benefit payments by the Netherlands’ government. Moreover, data governance and enormous energy consumption raise concerns. These challenges surrounding AI come with many questions.
FARI endeavours to provide answers to these questions against the Brussels backdrop. But first of all, how do we define AI for the Common Good? Does it mean that we should try to do as much good as possible with AI? Or that AI models have to be good? It is possible to do harm with good AI and to do good to certain people with bad AI.
Likewise, the difference between “Sustainable AI” and “AI for sustainable development” is subtle but distinct. Furthermore, there is a need to align with other frameworks on international, European and local levels including the Sustainable Development Goals, Horizon Europe and the European Chip Act, and to figure out what is important for tackling local challenges.
A practical definition of AI, Data and Robotics for the Common Good was not found or given during this session. Yet technology shapes how we look at the world, the way we think and our values. For one of the panel members, the best way to control the future is to create it ourselves, and we should base this technological development on European values. However, there is a paradox: the EU excels in AI science, but lacks AI and digital expertise in industry and investment. To ensure that AI is developed in line with EU values, the most pressing issue is therefore not the definition of good AI, but the need for skills, ability, multi-disciplinary collaboration, funding and innovation diversity. Only then will we have an influence on ‘good’ AI, Data and Robotics, and on AI, Data and Robotics for the Common Good.
AI, Data and Robotics are designed, developed and rolled out on a worldwide scale. Yet local environments like cities are at the forefront when it comes to dealing with issues society faces, such as climate change, Covid and urban mobility. Cities have therefore taken leading roles in policy innovation. For the Brussels Capital Region, it is important not to reinvent the wheel but to update local, regional and national policies based on best practices from pioneering cities such as Barcelona, Montreal, Amsterdam, New York and London.
Barcelona, which focuses on moving from strategy to action, uses the “technological humanism” concept that puts people before technology. The discussion on AI governance is therefore a collective participatory debate that considers who will benefit from AI and who will be at risk. On the other side of the ocean, the AI for Humanity team at Mila, Quebec’s Artificial Intelligence ecosystem in Montréal, learnt that it is vital to create an organisational culture and structure around responsible AI and that, even though interdisciplinary work is difficult, it is essential. Furthermore, they emphasised that equity, diversity and inclusion efforts need to be intentional and that a purely technological fix should be avoided.
During this panel, the members made policy recommendations for the Brussels Capital Region based on their own local experiences and expertise. The overarching themes of the advice were as follows:
Mobilisation of the AI community is needed to solve relevant issues while embracing the challenges of real-world datasets and collaboration with experts in the field.
Returning to the question on defining AI for the Common Good, this panel talked about responsible AI for good and suggested that there is no AI for the Common Good if it is not responsible. Moreover, questions were raised about who is responsible for the failure of AI projects and how to ensure that the functionality of the technical solution is never an afterthought. Answers to these questions include the need to raise awareness among the general public and policymakers about AI.
Unlike the first session, the legal experts on this panel did not spend too much time defining what AI is, but rather looked at its impact on society. In Europe, the law is closely linked to human rights and the rule of law. This implies that the goal is not to define all the details, as there is a risk of forgetting things, but to manage certain tensions. The EU AI Act follows these principles. As with other technologies, the general public and law practitioners do not need to know how AI works but the rules should focus on ensuring safety and fundamental rights.
As such, the panel discussed the potential and shortcomings of this new Act, including the mix between right-based and risk-based approaches, the removal of the role of infrastructure from the Act and how to ensure that AI abides by the law. Other related concepts that were examined were Privacy by Design – where privacy is embedded in the technology from the beginning – and conformity – what it means for AI technologies and how it relates to responsibility.
Furthermore, from the legal perspective, participatory design for AI and civic participation, including consultation and beyond, are crucial and should be made mandatory. Moreover, it is necessary to make AI model documents available to the wider public so that they can be challenged by larger groups. Lastly, cross-working between teams and disciplines is vital for ensuring a sound AI Act.
In every case where AI is used, it is important to consider the context and to look at underrepresented groups. Even when creating tools to open the ‘black box’, it is important to know for whom the explainability is developed, especially since vulnerability is plural: the people most in need of AI solutions are often also those who can suffer the most negative effects.
Many questions remain open. How will the AI Act ensure the right “upstream design”: what data to use, how to curate it, how to benchmark it, which type of algorithm to choose, how to check mathematically what the system does, and how to validate and test against new data? Meanwhile, an increasing number of EU regulations revolve around the digital world (e.g. the Digital Markets Act, the Digital Services Act and the AI Act). The topic is here to stay.
The matter of sustainability of robotics touches on many different aspects including environment, inclusivity, accessibility, capacity, safety, effective human-robot collaboration and more. Here as well, there is a subtle difference between robotics for sustainability and sustainable robotics. Think for example about drones that capture the growth of trees to measure climate change or adaptive and smart hardware and human-robot collaboration that improve workplace safety. Both have their merit and are essential considering contemporary challenges.
The old paradigm states that engineering and designing technology and robotics are about functionality and fulfilling legal requirements. However, this ignores the non-neutrality of technology and the power of designers. Solving difficult issues with technology in a narrow way can result in unexpected outcomes. Consequently, we should not only look at the quantifiable hard impacts, but also at the soft impacts including privacy, trust, responsibility and paternalism. In other words, ethics should not be an afterthought.
Furthermore, to create truly sustainable robots, two main environmental considerations should be taken into account: firstly, make use of renewable materials in construction and, secondly, make mining more sustainable, with minimal cost to the environment as well as to humans. To this end, researchers are working on transient materials for sustainable robots, which are maximally biodegradable and have a minimal environmental footprint.
The members of the panel agreed that to ensure sustainable outcomes and have a positive impact on the community, all the aspects mentioned above need to be considered and that a comprehensive view on sustainability will require training.
Throughout the previous sessions, stakeholder involvement, especially with citizens, was mentioned as being crucial for rolling out AI, Data and Robotics responsibly. During the fifth session, the panel members shared best practices of citizen engagement. These included platforms such as the Stanford Online Deliberation Platform and Amai!, as well as jointly created policy recommendations such as the Montréal Declaration for Responsible Development of AI and the UNESCO Recommendation for AI Ethics.
(AI-moderated) online platforms and tools make it possible to engage large parts of society. They encourage the public to engage by asking questions, giving feedback and connecting them with experts. Moreover, citizens who engage in the debates on the Stanford Online Deliberation Platform can also receive support when needed, e.g. financial support, child benefit, WiFi support, food and equipment support.
Furthermore, increasing public awareness about AI and Data, and collaborating with public broadcasters provides the public space to discuss these topics. The Amai! project accomplishes this by asking citizens about their ideas for AI solutions and their opinions about which projects to prioritise. Subsequently, they invite relevant stakeholders to implement the ideas that citizens shared.
At policy level, citizens can be included by introducing them to relevant information about AI and involving them with common social and political issues. Additionally, they can be engaged by creating open dialogues with experts and by asking them to envision long-term perspectives. Good local facilitators are crucial here. On the one hand, they understand the culture and the context. On the other hand, they are well-placed to recruit local participants to assess the recommendations.
Responsible AI is clearly not reserved only for experts; all stakeholders have a role to play. Citizens will be motivated to participate when the outputs of the process are clear. As such, a declaration by citizens, organisations and institutions should be drafted.
The last session of the first FARI Conference concluded by looking once again at the role of AI, Data and Robotics for cities, particularly for the Brussels Capital Region. The panel talked about how cities can make the most of the age of digitalisation and the Internet of Things, based on examples from Antwerp, Amsterdam and Italy. A host of information is accessible; it may simply get lost between the different sensors or remain hidden in the enormous amount of data.
The session looked at practical examples such as digital twin cities and 3D point clouds. A digital twin city is a common online platform that aggregates data from many sensors to capture the real-life status of water, air, etc. Likewise, 3D point clouds provide a fine-grained, real-time view of what is happening and where things are located in the city. These tools can play a role in maintaining public infrastructure, in monitoring tree health, in adapting to the climate and in accessing the city. Some of these functionalities are run by deep learning algorithms.
However, the difficulty of sharing data is a key issue. In the case of cities, there are specific issues related to finding high-quality data from different contexts and regions. Building good data flows is not only a technical issue but also a human and political one. It is therefore just as important to boost human capacity. Furthermore, while smart cities bring many opportunities to improve quality of life and to tackle certain urban issues, high energy consumption is a concern. When designing and rolling out these tools, the central question is: how can this AI be used for the people and the common good?
Nevertheless, digital twin cities and 3D point clouds provide a rich data source and an accurate map of public areas, and make it possible to monitor real needs and issues, which can support well-informed policy decisions. Moreover, building participatory mappings means providing real-time information for users, especially women, so they can navigate public spaces safely, with benefits for their mobility and health.
In all sectors and regions, Artificial Intelligence (AI), Data and Robotics play a significant role in shaping our economies and societies. Against this backdrop, it is crucial for the Brussels Capital Region to create a comprehensive AI policy to reap the opportunities these technologies offer and to deal with their potential pitfalls. Many other urban regions and research institutions are working on these topics and are willing to share their experiences and insights. Furthermore, expertise in AI and its governance is available ‘in-house’, in Brussels universities and at the FARI Institute. The Brussels Region is well placed to create legislation and frameworks to ensure responsible AI in both the private and public sectors.
Given the dual nature of AI, Data and Robotics, the policies that govern them are crucial, especially for urban regions like Brussels where both their positive and negative aspects take centre stage. The use of AI should therefore fit its purpose and not exceed what is necessary to achieve it. AI, Data and Robotics need to be considered as enablers, not solutions. The question is: how do we define fit for purpose?
At European level, the Commission is currently finalising an AI Act that takes a risk-based approach. This will impact policies at every level and create a framework for AI governance. Nevertheless, regions have an important role to play in transposing this to their local context. It is important to note that developing an extensive AI policy does not just mean signing a charter; it is also a commitment to increasing AI literacy, governing with stakeholders, determining responsibility, insisting on multi- and interdisciplinary collaboration and enabling data flows.
Based on the local and international outlook of AI in cities, here are the key priorities for Brussels identified by the Observer Committee.
To create a comprehensive AI policy, it is critical to first raise awareness and develop a common understanding and language regarding AI technologies. Thereafter, the relationship between the various facets of AI and their direct and indirect impacts can be discussed. These debates can take place, for example, during a Brussels Challenge in which critical questions around AI and related issues are tackled and addressed in local communities.
The concept of ‘AI Localism’ calls for calibrating algorithms and AI policies to local circumstances, so that policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and responsibility. Brussels needs to lead the way in participatory policy design and sustained civic engagement to identify the real needs of people (guided by examples from Stanford and Amai!). This also includes coordinating stakeholders and taking everyone’s points of view into account, in particular those of marginalised communities. In other words, an inclusive approach should be favoured to ensure desirable applications and systems are developed.
New technologies come with new responsibilities. Initiate debates and promote collaboration to determine who is responsible for potential failures. Who will be protected by AI policies? The EU AI Act is a good start, but what does this mean for Brussels Capital Region? Pursuant to the previous recommendation, debates around these questions should involve various stakeholders such as citizens, companies, civic associations, local authorities, universities, etc.
This is as vital as it is difficult: even in a single project or institution, people from different walks of life and disciplines have to assess the different stages of developing an AI tool. Multi- and interdisciplinary collaboration should be fostered between various teams and stakeholders, as well as across the different levels of governance (i.e. local, regional, federal, international).
To imagine excellent, responsible (AI) technologies, data is key. Gathering data is however still haphazard, with many partners holding onto their data or not knowing how to get the most out of it. The Brussels Capital Region can play a crucial role in facilitating data flows responsibly, while embracing the challenges of real-world datasets and collaboration with domain experts.
Many other regions and institutions have created AI guidelines, principles, regulations and lessons. These could be used to initiate conversations with relevant stakeholders and partners in Brussels. There is no need to start from a blank page: the Brussels Capital Region should reuse and adapt. Look, for example, at what pioneering cities such as Amsterdam, Barcelona, Copenhagen, Helsinki and Montréal are doing, as well as at documents and principles from GovLab/NYU and UNESCO on how to create responsible AI policies.
The Brussels context calls for the integration of different policy plans, including current regional innovation and infrastructure plans. These include public procurement, which public authorities can use as a market tool to govern and influence AI development. Moreover, transparency is key and could be reinforced through the creation of a region-wide algorithm registry, as in Amsterdam and Copenhagen. Committing to the above should lead to a more responsible and desirable development of AI, Data and Robotics technologies for the Common Good.