Comparing Abstract Interpretation and Linear Programming Based Verifications of Neural Networks
|Type|SNIC Small Compute|
|Principal investigator|Ahmed Rezine <firstname.lastname@example.org>|
|Duration|2022-10-25 – 2023-11-01|
Formal guarantees for the correctness of Machine Learning based algorithms driving safety-critical systems, e.g., cyber-physical systems or smart medical implants, are badly needed. Two concrete examples are autonomous driving in unpredictable environments and real-time monitoring of patients for medical emergencies (e.g., heart attacks or epileptic seizures). It is, for instance, unacceptable for an ML-based mobile cardiac device to miss a typical heart attack or epileptic seizure. This project is part of an effort to build new verification techniques for ML-based algorithms.
In this project, we will compare the efficiency of state-of-the-art verification methods, in particular those based on modified SMT solvers (such as Reluplex and Marabou) and those based on Abstract Interpretation (such as ERAN). This will allow us to identify the tradeoffs and to propose original verification techniques that are more scalable than the SMT-based methods and more precise than the Abstract Interpretation based ones.
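To illustrate the Abstract Interpretation side of the comparison, the sketch below propagates a box (interval) abstract domain through a tiny two-layer ReLU network. This is a minimal, hypothetical example: the weights, biases, and input box are invented for illustration and do not come from the project; real tools such as ERAN use richer domains (e.g., zonotopes, DeepPoly) that are more precise than plain intervals.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b.

    Positive weights map lower bounds to lower bounds; negative weights
    swap them. The result over-approximates the true image of the box.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied bound-wise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network (hypothetical example values).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

# Input box: each of the two inputs ranges over [-1, 1].
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = interval_relu(*interval_affine(W1, b1, lo, hi))
lo, hi = interval_affine(W2, b2, lo, hi)
print(lo, hi)  # certified output bounds: [0.] [3.5]
```

Any property that holds for the whole output interval (e.g., "the output never exceeds 3.5 on this input box") is certified for every concrete input, which is what makes such analyses scalable; the price is precision, since the interval may be much wider than the network's true output range.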