Multiple Open PhD Positions (with Prof. Bartocci) in the Doctoral College on Secure and Intelligent Human-Centric Digital Technologies (SecInt) and the ProbInG Project

We are looking for PhD students with a background in formal/mathematical modeling and analysis who are interested in one of the following topics:
  • Runtime Assurance and Simplex Architecture for Machine Learning components
  • Verification/Falsification Analysis of Machine Learning components
  • Automated Analysis of Probabilistic Programs
  • Verification of Probabilistic Hyperproperties
  • Statistical Model Checking for Cyber-Physical Systems
  • Anomaly Detection

(project leader Prof. E. Bartocci @ CPS research division)

I am looking for applications from qualified students (Master's level) interested in doing a PhD in applied formal methods for CPS safety and security, preferably with prior knowledge of machine learning, control theory, and/or formal verification. There are no teaching obligations associated with this position.

Salaries follow the standard FWF scheme (doctoral candidate, 30 h/week). Multiple positions are available.

Please send applications directly to Ezio Bartocci by email.

Description of the Doctoral College SecInt

Digitalization is transforming our society, making our everyday life more and more dependent on computing platforms and online services. These are built to sense and process the environment in which we live and the activities we carry out, with the ultimate goal of returning predictions and taking actions that support and enhance our lives. Prominent examples of this trend are autonomous systems (e.g., self-driving cars and robots), cyber-physical systems (e.g., implanted medical devices), and apps in wearable devices (e.g., Coronavirus contact tracing apps). Despite the interest of stakeholders and the attention of the media, digital technologies that so intimately affect human life are not yet ready for widespread deployment, as key technical and ethical questions remain open, such as trustworthiness, security, and privacy. If these problems are not solved, supposedly intelligent human-centric technologies can lead to death or other undesirable consequences: for example, the learning algorithms of autonomous cars can be fooled into causing crashes, implanted medical devices can be remotely hacked to trigger unwanted defibrillations, and contact tracing apps can be misused to build an Orwellian surveillance system or to inject false at-risk alerts.

The goal of SecInt is to develop the scientific foundations of secure and intelligent human-centric digital technologies. This requires interdisciplinary research, establishing synergies across different research fields (Security and Privacy, Machine Learning, and Formal Methods). Research highlights arising from these synergies include the design of machine learning algorithms resistant to adversarial attacks, the design of machine learning algorithms for security and privacy analysis, the security analysis of personal medical devices, the design of secure and privacy-preserving contact tracing apps, and the enforcement of safety for dynamic robots.

The research is accompanied by a supporting educational and training programme, which encompasses the ethics of secure and intelligent digital technologies, interdisciplinary technical knowledge, and internships with elite international research partners that have expressed interest in collaborating with SecInt.