11 April 2024:
Marta Kwiatkowska,
Gethin Norman and
Dave Parker
have won the 2024
ETAPS Test-of-Time Tool Award
for
PRISM.
For more details, see
here.
22 February 2024:
Marta Kwiatkowska gives a keynote on “Strategy synthesis for stochastic games with neural perception mechanisms”
at the CSL conference, Naples, Italy. Follow this
link for more details.
15 February 2024:
Marta Kwiatkowska gives an online presentation in the NIST AI Metrology Colloquium Series entitled
“Safety and robustness for deep learning with provable guarantees”. Follow this
link for more details.
2 February 2024:
Marta Kwiatkowska gives a keynote on “When to Trust AI…” at the first Danish Digitalization,
Data Science and AI conference (D3A), Nyborg, Denmark. Follow this
link for more details.
November 2023:
PRISM-games 3.2 is now out,
including symbolic model checking of turn-based stochastic games, correlated/fair equilibria and more.
Further information
here.
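As background on the models mentioned above: in a turn-based stochastic game, each state is controlled by one player, and reachability values can be computed by value iteration that alternates max and min over the players' choices. The sketch below is purely illustrative, using a hypothetical toy model in plain Python; PRISM-games itself implements symbolic (decision-diagram-based) algorithms, which this does not attempt to reproduce.

```python
# Illustrative explicit-state value iteration for reachability in a
# turn-based stochastic game (toy model, not PRISM-games' symbolic engine).
# Player 1 maximises, player 2 minimises, the probability of reaching the target.

def game_value_iteration(states, owner, trans, target, iters=1000, eps=1e-10):
    """states: iterable of state ids; owner[s] in {1, 2};
    trans[s]: list of actions, each a list of (probability, successor) pairs;
    target: set of goal states. Returns a dict of reachability values."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        new_v = {}
        for s in states:
            if s in target or not trans[s]:
                new_v[s] = v[s]          # absorbing: value is fixed
                continue
            action_vals = [sum(p * v[t] for p, t in a) for a in trans[s]]
            new_v[s] = max(action_vals) if owner[s] == 1 else min(action_vals)
        if max(abs(new_v[s] - v[s]) for s in states) < eps:
            v = new_v
            break
        v = new_v
    return v

# Tiny 4-state game: from s0 (player 1) either move to s1 (player 2) or
# flip a fair coin between the target s3 and the sink s2.
trans = {
    0: [[(1.0, 1)], [(0.5, 3), (0.5, 2)]],
    1: [[(1.0, 2)], [(0.3, 3), (0.7, 2)]],
    2: [],          # sink
    3: [],          # target
}
owner = {0: 1, 1: 2, 2: 1, 3: 1}
vals = game_value_iteration(range(4), owner, trans, {3})
print(round(vals[0], 6))  # player 1's best option is the coin flip: 0.5
```

Player 2, as minimiser, steers s1 into the sink, so player 1's optimal choice at s0 is the coin flip, giving value 0.5.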
October 2023: A postdoc position is available now at Oxford on
the FAIR project.
Applications close on 28th November 2023. Follow this
link for more details.
October 2023: We are hiring two Research Associates on the FUN2MODEL project! For full details
click here. The vacancies are full-time, fixed-term and start as soon as possible. Applications close on 6th November 2023.
July 2023:
PRISM version 4.8 is now
available,
including support for robust verification of uncertain models (interval Markov decision processes and Markov chains),
improved strategy/policy generation and much more.
More information
here.
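To give a flavour of the robust verification feature mentioned above: in an interval model, each transition probability is only known to lie in an interval, and robust value iteration resolves the uncertainty adversarially at every step. The following is a hedged sketch of our own for the simplest case, an interval Markov chain (the MDP case adds an extra optimisation over actions); it is a toy illustration, not PRISM's implementation.

```python
# Toy robust value iteration for a small interval Markov chain (illustrative
# sketch, not PRISM's algorithm). We compute a lower bound on the probability
# of reaching the target, with transition probabilities chosen adversarially
# within their intervals.

def worst_case_dist(intervals, values):
    """Pick a distribution within the intervals that minimises the expected
    value: greedily push the free probability mass towards low-value successors."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    lo = [intervals[i][0] for i in range(len(values))]
    probs = lo[:]                      # start from all lower bounds
    remaining = 1.0 - sum(lo)          # mass still to be distributed
    for i in order:                    # fill cheapest successors first
        extra = min(intervals[i][1] - lo[i], remaining)
        probs[i] += extra
        remaining -= extra
    return sum(p * v for p, v in zip(probs, values))

def robust_reach(trans, target, n, iters=200):
    """trans[s] = list of (successor, (lo, hi)); returns worst-case
    reachability values over all interval-consistent chains."""
    v = [1.0 if s in target else 0.0 for s in range(n)]
    for _ in range(iters):
        v = [v[s] if (s in target or not trans[s]) else
             worst_case_dist([iv for _, iv in trans[s]],
                             [v[t] for t, _ in trans[s]])
             for s in range(n)]
    return v

# Toy chain: from state 0 we reach target 1 with probability in [0.3, 0.7]
# and sink 2 with the complementary probability in [0.3, 0.7].
trans = {0: [(1, (0.3, 0.7)), (2, (0.3, 0.7))], 1: [], 2: []}
v = robust_reach(trans, {1}, 3)
print(round(v[0], 6))  # the adversary puts minimal mass on the target: 0.3
```

The greedy mass-allocation step is the standard way to evaluate the robust Bellman operator for interval models: sort successors by value and saturate the cheapest intervals first.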
April 2023:
Marta Kwiatkowska has been elected as a member of the American Academy of Arts and Sciences. Founded in 1780, the Academy honours excellence and convenes leaders from every field of human endeavour to examine new ideas, address issues of importance to the United States and the world, and work together, as expressed in their charter, ‘to cultivate every art and science which may tend to advance the interest, honour, dignity, and happiness of a free, independent, and virtuous people.’ Current members represent today’s innovative thinkers in every field and profession, including more than two hundred and fifty Nobel and Pulitzer Prize winners. Marta has been invited to a formal induction in September, to be held in Cambridge, MA, where the Academy’s headquarters are located.
More information and a list of current Academy members are available here:
American Academy of Arts and Sciences 2023.
Congratulations Marta!
October 2022 - Papers: Two papers accepted at NeurIPS 2022!
-
In this paper we study learning models where the learner is given more power through the use of local queries, and give the first distribution-free algorithms that perform robust empirical risk minimization (ERM) for this notion of robustness.
-
In this paper, we propose a robust anytime learning approach for Markov decision processes based on a Bayesian approach to learning probability intervals.
May 2022 - Papers:
Two papers accepted at ICML'22!
-
In this paper we present Tractable Uncertainty for STructure learning (TRUST), a framework for approximate posterior inference that relies on probabilistic circuits as the representation of our posterior belief.
-
In this paper, we analyze the learning dynamics of temporal difference algorithms to gain novel insight into the tension between these two objectives.
May 2022 - Paper:
Paper accepted at UAI'22:
we present novel techniques for neuro-symbolic concurrent stochastic games, a recently proposed modelling formalism to represent a set of probabilistic agents operating in a continuous-space environment using a combination of neural network based perception mechanisms and traditional symbolic methods.
April 2022 - Papers:
Three papers accepted at IJCAI 2022!
-
Here we consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs). In particular, we work with the ε-δ-IF formulation.
-
In this paper we address the fundamental problem in adversarial machine learning of quantifying how much training data is needed in the presence of evasion attacks, within the framework of PAC learning, focusing on the class of decision lists.
-
In this paper we develop a method to quantify the robustness of decision functions with respect to credal Bayesian networks, formal parametric models of the environment where uncertainty is expressed through credal sets on the parameters.
Jan 2022 - Paper:
Paper accepted at TACAS'22,
showing how we implement algorithms for both normal form games and the more complex case of multi-player concurrent stochastic games with temporal logic specifications.
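As elementary background on the normal form game case: when a 2x2 zero-sum game has a fully mixed equilibrium, that equilibrium admits a closed form derived from the indifference conditions. The snippet below is a toy sketch of ours for intuition only; it is not the paper's implementation, which handles general multi-player concurrent stochastic games.

```python
# Illustrative only (not the TACAS'22 algorithms): closed-form mixed
# equilibrium of a 2x2 zero-sum normal form game via indifference conditions.

def solve_2x2_zero_sum(a):
    """a[i][j] = payoff to the row player. Returns (row strategy, game value).
    Assumes no pure saddle point, so the equilibrium is fully mixed and
    both of the opponent's columns leave the row player indifferent."""
    denom = a[0][0] - a[0][1] - a[1][0] + a[1][1]
    p = (a[1][1] - a[1][0]) / denom          # probability of playing row 0
    value = (a[0][0] * a[1][1] - a[0][1] * a[1][0]) / denom
    return (p, 1 - p), value

# Matching pennies: win 1 on a match, lose 1 on a mismatch.
strategy, value = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(strategy, value)  # (0.5, 0.5) 0.0
```

Each player randomises 50/50, and neither can improve by deviating, which is exactly the equilibrium condition the general algorithms enforce at scale.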
November 2021 - Event:
Marta gives keynote lecture at
Machine Learning in Poland 2021 titled "Safety and Robustness for Deep Learning with Provable Guarantees" and participates in a panel discussion titled 'Women in Machine Learning in Poland'.
October 2021 - Welcome:
We are pleased to welcome researcher
Matthew Wicker to the FUN2MODEL project. Matthew will focus on probabilistic verification and synthesis for Bayesian neural networks.
September 2021 - Event:
Marta gives keynote lecture at
ECML PKDD 2021 titled "Safety and Robustness for Deep Learning with Provable Guarantees".
September 2021 - Award:
Marta has been awarded the
Van Wijngaarden Award 2021 for Computer Science in recognition of her numerous and highly significant contributions to preventing software faults. The five-yearly award also recognised mathematician
Susan Murphy for her work in improving decision making in health. The
Van Wijngaarden Award was established by
CWI, and is named after former CWI director, Aad van Wijngaarden. Congratulations Marta!
September 2021 - Welcome:
We are delighted that Senior Research Fellow
Gethin Norman has joined the FUN2MODEL project. Gethin specialises in the modelling formalisms, theories, logics and algorithms that underpin the
PRISM model checker.
August 2021 - Event:
Dave Parker
gives an invited tutorial at
EXPRESS/SOS 2021
on "Probabilistic Verification of Concurrent Autonomous Systems".
May 2021 - Paper:
accepted at
UAI 2021!
Here we propose a framework to provide safety certification for given control policies, and synthesize control policies that improve the certification bounds.
May 2021 - Event:
Marta was awarded the prestigious 2019 BCS Lovelace Medal for her research in probabilistic and quantitative verification, and delivered the 2020 Lovelace Lecture, “Probabilistic model checking for the data-rich world”. A recording can be found
here.
The BCS Lovelace Lecture was sponsored by The Ada Lovelace Institute.
May 2021 - Event:
Marta gives a keynote speech at
SAC 2021 titled "Advances and Challenges in Quantitative Verification for Adaptive Systems".
May 2021 - Event:
Marta gives a keynote speech at
FSEN 2021 titled "Safety and Robustness for Deep Learning with Provable Guarantees".
April 2021 - Papers:
three papers accepted at
IJCAI 2021!
-
Here we build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP).
-
Here we demonstrate provable guarantees on the robustness of decision rules, paving the way towards provably causally robust decision-making systems.
-
Here we introduce the first method for verifying the time-unbounded safety of neural networks controlling dynamical systems.
April 2021 - Welcome:
We warmly welcome DPhil student
Emanuele La Malfa to the FUN2MODEL project. Emanuele's work will focus on robustness and explainability for natural language processing (NLP).
March 2021 - Software Release:
PRISM 4.7 is now available, including support for POMDPs, improved accuracy reporting and
more.
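For context on the POMDP support mentioned above: a POMDP policy cannot observe the state directly and instead acts on a belief (a distribution over states), updated by Bayes' rule after each action and observation. The following is a small self-contained illustration of that update, with a hypothetical two-state model; it says nothing about PRISM's internals.

```python
# Toy POMDP belief update (illustrative, hypothetical model): after taking
# an action and receiving an observation, the belief over hidden states is
# revised by Bayes' rule.

def belief_update(belief, action, obs, trans, obs_model):
    """belief: dict state -> prob; trans[(s, action)]: dict s' -> prob;
    obs_model[(s', action)]: dict observation -> prob."""
    new_belief = {}
    for s2 in {t for s in belief for t in trans[(s, action)]}:
        # Predict: probability of landing in s2 under the action.
        pred = sum(b * trans[(s, action)].get(s2, 0.0)
                   for s, b in belief.items())
        # Correct: weight by the likelihood of the observation.
        new_belief[s2] = obs_model[(s2, action)].get(obs, 0.0) * pred
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Two hidden states; action 'a' keeps the state; observation 'o' is seen
# with probability 0.9 in state 0 and 0.2 in state 1.
trans = {(0, 'a'): {0: 1.0}, (1, 'a'): {1: 1.0}}
obs_model = {(0, 'a'): {'o': 0.9}, (1, 'a'): {'o': 0.2}}
b = belief_update({0: 0.5, 1: 0.5}, 'a', 'o', trans, obs_model)
print(round(b[0], 4))  # 0.45 / (0.45 + 0.10) ≈ 0.8182
```

Seeing 'o' shifts the belief towards state 0, since that state explains the observation better; verification techniques for POMDPs reason over exactly this belief space.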
March 2021 - Welcome:
We warmly welcome researcher
Rui Yan to the FUN2MODEL project. Rui's work will focus on probabilistic verification and synthesis, including Bayesian and psychological games.
December 2020 - News:
Marta becomes a Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS).
ELLIS Fellows advance science, provide strategic advice and leadership, and act as ambassadors of ELLIS.
November 2020 - Event:
Dave Parker gives a keynote talk at
iFM 2020 titled "Verification with Stochastic Games: Advances and Challenges".
October 2020 - Event:
Marta is a panelist at the Royal Society's Briefing for
Making Europe a Leader in AI: in conversation with Venki Ramakrishnan, Antoine Petit and Martin Stratmann.
October 2020 - Welcome:
We are delighted that graduate student
Elias Benussi has joined the FUN2MODEL project. Elias will focus on fairness in AI.
October 2020 - Paper:
accepted at
FORMATS 2020!
Here we propose MOSAIC, an algorithm for measuring the safety of deep reinforcement learning controllers in stochastic settings.
October 2020 - Paper:
accepted at
EMNLP 2020!
Here we focus on robustness of text classification against word substitutions.
September 2020 - Event:
Marta gives a webinar on 'Safety and robustness for deep learning with provable guarantees' in the ICE-TCS Reykjavik University series.
September 2020 - Event:
Marta gives a keynote speech at
ASE 2020 on 'Safety and robustness for deep learning with provable guarantees'.
September 2020 - Event:
Marta gives a plenary talk at
DNA26 on 'Probabilistic verification and synthesis for reliable molecular circuit designs'.
September 2020 - Event:
Marta gives a keynote speech at
KR 2020 on 'Probabilistic model checking for strategic equilibria-based decision making'.
August 2020 - Welcome:
We are delighted to have researcher
Gabriel Santos join the FUN2MODEL project. Gabriel will focus on strategic reasoning and game-theoretic techniques in AI.
August 2020 - Paper:
accepted at
QEST 2020!
Here we propose multi-coalitional verification techniques for concurrent stochastic games.
July 2020 - Event:
Marta gives a webinar on "When to trust a self-driving car" at The National Academy of Sciences, India (NASI) - Delhi Chapter, attended by more than 440 participants. Please see
here to watch a recording.
June 2020 - Event:
Marta has been invited to sit on the Global Partnership on Artificial Intelligence (GPAI) Working Group on Responsible AI, nominated by the European Commission.
GPAI is an international and multi-stakeholder initiative to guide the responsible development and use of artificial intelligence consistent with human rights, fundamental freedoms, and shared democratic values, as reflected in the OECD Recommendation on AI.
Please see here for more details.
May 2020 - Paper:
accepted at
UAI 2020!
Here we show how to compute worst-case adversarial guarantees for Bayesian Neural Networks (BNNs).
May 2020 - Paper:
by
Clare Lyle accepted at
ICML 2020!
Here we consider the problem of learning abstractions that generalize in block MDPs, families of environments with a shared latent state space, and dynamics structure over that latent space, but varying observations.
April 2020 - Welcome:
We are delighted to welcome researcher
Andrea Patane to the project. Andrea brings with him expertise in safety verification of Bayesian models and the role played by uncertainty in adversarial prediction settings. He will focus on data-driven perception modelling and cognitive reasoning, for integration within agent-based models.
April 2020 - Paper:
accepted at
CAV 2020!
Here we present a major new release of the PRISM-games model checker, featuring multiple significant advances in its support for verification and strategy synthesis of stochastic games.
Download the tool and case studies here.
February 2020 - Paper:
by
Min Wu accepted at
CVPR 2020!
Here we consider the robustness of deep neural networks on videos.
January 2020 - Paper:
accepted at
AISTATS 2020!
Here we show how to compute worst-case adversarial guarantees for classification with Gaussian processes.
January 2020 - Event:
Marta gives a keynote speech at
ERTS 2020 titled 'Safety verification for deep neural networks with provable guarantees'.
January 2020 - Software Release:
PRISM-games 3.0 is now available, providing concurrent stochastic games, equilibria, real-time models and many new examples. More information
here.
October 2019 - Welcome: We warmly welcome graduate student
Emanuele La Malfa, who joins the project as an Associate Member and will formally join in April 2021. Emanuele is working on robustness and explainability of AI, with a specific focus on natural language processing (NLP) models.
October 2019 - Welcome: We are delighted that graduate student
Benjie Wang has joined the FUN2MODEL project. Benjie is particularly interested in causal modelling as a means to enhance the robustness and explainability of deep learning.
October 2019 - Welcome:
We are excited to appoint postdoctoral researcher
Luca Laurenti to the project. Luca will work on developing probabilistic verification and synthesis methods for deep learning, with a particular focus on Bayesian neural networks.
1st October 2019 - The Fun Begins!
FUN2MODEL kicks off and work begins as we aim to make major advances in the quest towards provably robust and beneficial AI.
March 2019 - Announcement: We are delighted to announce that
Professor Marta Kwiatkowska has been awarded a highly competitive European Research Council Advanced Investigators Grant for a new five-year project FUN2MODEL.
Our objectives are to develop novel probabilistic verification and synthesis techniques to guarantee safety, robustness and fairness for complex decisions based on machine learning; to formulate a comprehensive, compositional game-based modelling framework for reasoning about systems of autonomous agents and their interactions; and to evaluate the techniques on a variety of case studies.
This is the second ERC Advanced Grant awarded to Marta Kwiatkowska: from 2010 until 2016 she held the grant VERIWARE.
Please see the press release
here and
here for more details.