I had the pleasure of attending NDSS 2022 on 24–28 April in San Diego, where our research paper “Testability Tarpits: the Impact of Code Patterns on the Security Testing of Web Applications” was accepted and presented. Besides taking place in a very nice location, NDSS confirmed my high expectations with respect to both content and networking.

In this blog post I will cover what NDSS is about, why we publish our research work there, and a few content-related highlights that I found interesting.

What is NDSS?


NDSS stands for Network and Distributed System Security. The community considers it one of the four top-tier scientific security conferences (see, e.g., the Computer Security Conference Ranking and Statistic). It is sponsored by major IT companies (e.g., in 2022: Google, IBM, Microsoft, Intel, Qualcomm, …) and gathers more than 500 security experts from top research institutes and businesses.

Personally, I believe the “network” keyword is there more for legacy/historical reasons than for content-related ones. Indeed, when you look at the statistics on accepted papers by topic (see the image below, borrowed from here), the variety of subjects is quite large and similar to that of other top security conferences.


Interestingly enough, the hype around machine learning is far from gone: more than 130 papers were submitted in that area, though fewer than 20 were accepted.

Other topics had better acceptance rates, but overall many papers were submitted (511 in 2022) and only a few were accepted (16%).

This is quite in line with the other three top security conferences.

Why do we publish our research work there?


How do you know whether your research work is novel, innovative, sound, impactful, scientifically relevant, and of good quality? How do you assess it? Peer review is the process the scientific community has put in place to evaluate all these important criteria. Your work gets scrutinized by three or four renowned experts who do not miss an opportunity to spell out its weaknesses. If your work is not well presented, it will be rejected. If your experiments are not done properly (sound, large enough, repeatable, …), your work will be rejected. If your scientific impact is low, your work will again be rejected. Top-tier conferences like NDSS tend to have the toughest peer-review processes. These conferences receive a lot of submissions and have a low acceptance rate (often < 20%). Below is some data borrowed from the Computer Security Conference Ranking and Statistic.

Year   IEEE S&P          ACM CCS           USENIX Security    NDSS
2021   12.1% (115/952)   22.3% (196/879)   18.7% (246/1316)   15.2% (87/573)
2020   12.4% (104/841)   16.9% (121/715)   16.1% (157/977)    17.4% (88/506)
2019   12.0% (84/679)    16.0% (149/934)   15.5% (113/729)    17.0% (89/521)


The peer-review process is not perfect. The community is making many efforts to improve it, also experimenting with different approaches, but that is another story.



These numbers show that researchers in the security community strive to publish their work at the four top-tier conferences. That is the way to expose your work to the entire community and to be recognized as a thought-leading research team. Researchers do not have time to read every published paper, but they will look at those accepted at the top conferences. Our mission at SAP Security Research is to conduct applied research with the goal of providing organic security and privacy innovation for SAP. Most of our ideas mature via collaborations with external researchers and by being exposed to the community for feedback. Publishing at NDSS and similar conferences is thus a first step for us to validate that an idea can work, and a trigger for further internal activity at SAP to transfer the outcomes into the company.

An opportunity for scientific exchange


Attending the conference in person is a great opportunity to further disseminate your work and to get exposed, in a few days, to recent innovations, trends, and concerns in the field of security and privacy. But, even more importantly, it is a fantastic networking experience for scientific exchange.

The Covid pandemic forced the scientific community to experiment with virtual conferences. I attended a couple of them myself. You can absorb information and knowledge, but it is not even half the experience you get from attending in person.

Let me give a small example. Recently, our team started investigating the state of the art in fuzzing for Web APIs to see whether we can complement and enrich what SAP is doing internally in that area (e.g., https://github.com/SAP/odfuzz). At NDSS I could attend an entire workshop on fuzzing. Besides the interesting talks at the workshop, I had extensive chats with three fuzzing experts: once at breakfast, once during a coffee break on comfy beach chairs, and once at dinner. I got three different and sometimes diverging opinions :-) on which directions could be interesting to explore in our SAP context, but those discussions covered and answered points that you cannot read in papers and opened channels for future collaborations with amazing people.
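
To make the idea concrete, here is a minimal sketch of mutation-based fuzzing against a Web API endpoint. This is a toy illustration of the technique, not how odfuzz works; the base URL and the query parameter name are hypothetical placeholders.

```python
import random
import string
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical endpoint; point this at a service you are allowed to test.
BASE_URL = "http://localhost:8080/api/products"

def mutate(seed: str) -> str:
    """Apply one random character-level mutation to a seed input."""
    chars = list(seed)
    op = random.choice(["flip", "insert", "delete"])
    if op == "flip" and chars:
        chars[random.randrange(len(chars))] = random.choice(string.printable)
    elif op == "insert":
        chars.insert(random.randrange(len(chars) + 1), random.choice(string.printable))
    elif chars:
        del chars[random.randrange(len(chars))]
    return "".join(chars)

def fuzz(seed: str, iterations: int = 200) -> None:
    """Send mutated query values and flag unexpected 5xx responses."""
    for _ in range(iterations):
        value = mutate(seed)
        url = BASE_URL + "?name=" + urllib.parse.quote(value)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code      # 4xx/5xx responses raise HTTPError
        except urllib.error.URLError:
            continue               # connection problem, not a finding
        if status >= 500:          # unhandled server error -> worth a look
            print(f"Potential bug: HTTP {status} for input {value!r}")

if __name__ == "__main__":
    fuzz("laptop")
```

Real Web API fuzzers are far more sophisticated (schema-aware input generation, coverage feedback, stateful request sequences), but the mutate-send-observe loop above is the common core.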

What did I find interesting there?


Besides the fuzzing workshop, there were many other interesting presentations at the main NDSS conference as well as at the other co-located workshops (e.g., Usable Security and Privacy, USEC). Presentations were recorded and I hope they will be made available to everyone soon. (For the previous edition, they were released on a YouTube channel a few months after the event.)

Hereafter, I will limit myself to advertising the talk about our paper “Testability Tarpits: the Impact of Code Patterns on the Security Testing of Web Applications” by Feras Al-Kassar (where we show how to detect more vulnerabilities with static analysis) and to briefly discussing the two keynotes.
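
To give a flavor of what the paper means by a testability tarpit: a code pattern that is easy for developers to write but hard for static analysis tools to reason about. The paper studies such patterns in web application code; the sketch below is a hypothetical transposition of one such pattern (dynamic dispatch) into Python, purely for illustration.

```python
import subprocess

# Direct taint flow: most static analyzers can follow this path and
# flag the command injection from user_input into the shell.
def run_direct(user_input: str) -> None:
    subprocess.run("ping " + user_input, shell=True)

def _do_ping(target: str) -> None:
    subprocess.run("ping " + target, shell=True)  # same vulnerable sink

# Tarpit: the sink is reached through a function name computed at
# runtime, so a purely static call graph often loses the edge from
# the tainted source to the sink, and the vulnerability goes unreported.
def run_dynamic(user_input: str, action: str = "ping") -> None:
    handler = globals()["_do_" + action]  # dynamic lookup
    handler(user_input)
```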

Measuring Security Outcomes


Keynote by Alex Gantman (Qualcomm Technologies Inc.)

Are we in a position to claim that security has improved over the last 10 years? Alex conveyed very interesting insights in this respect, questioning whether we are using the right metrics to measure security improvements and drawing analogies with other areas. For instance, in car safety, besides crash tests and the like, the success of a new safety measure (e.g., the seat belt) is also evaluated against the harm experienced by end users (e.g., drivers and passengers). Data about fatalities due to car crashes is carefully monitored and, once normalized over the population, makes it possible to claim that car safety improved by a factor of three over the last 40 years. What can we say about security outcomes? We have the feeling that the status quo improved, and we worked a lot for that, but how can we claim it without proper measurement? In other words, how can we provide economic evidence that the benefit of security outcomes exceeds the cost of putting security measures in place? Computing the cost is easy (e.g., license costs for a security testing tool, third-party pentest analyses, etc.), but computing the benefit is difficult. However, this difficulty should not be used as an excuse not to even try. The security community is thus invited to measure the benefit of the security outcomes it provides more systematically.
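
To illustrate the kind of normalized outcome metric Alex had in mind, here is a tiny sketch. The figures are hypothetical, chosen only so the rate improves roughly threefold as in his car-safety example; they are not real traffic statistics.

```python
# Hypothetical illustrative figures, not real traffic-safety data.
data = {
    1980: {"fatalities": 51_000, "population": 227_000_000},
    2020: {"fatalities": 25_000, "population": 331_000_000},
}

rates = {
    year: d["fatalities"] / d["population"] * 100_000  # per 100k inhabitants
    for year, d in data.items()
}
for year, rate in rates.items():
    print(f"{year}: {rate:.1f} fatalities per 100k inhabitants")

# The outcome metric is the normalized rate, not the raw count: the raw
# count roughly halved, while the per-capita rate dropped ~3x (22.5 -> 7.6).
print(f"improvement factor: {rates[1980] / rates[2020]:.1f}")
```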

In the meantime, we have compliance and standards in place. The drawback, in my humble opinion, is that we end up in crazy situations where a tick in the compliance report may become more palatable than the actual benefit behind that tick. In addition, we bow to external expectation pressure and conform to what everyone else does. Which organization nowadays can make the bold decision to use innovative internal tools rather than simply buy licenses for the tools everyone else uses?

All in all, a very inspiring talk from Alex!

Will Cryptographically-secure Anonymous Communication Ever be Practical?


Keynote by Srini Devadas (MIT)

Great presentation from Srini, who surveyed what has been done so far on cryptographically-secure anonymous communication systems. He compared the core works in this area along two main conflicting dimensions, privacy and practicality (see the image below, borrowed from here).


For instance, Tor was discussed as the only widely used anonymity system. Unfortunately, Tor leaks a substantial amount of metadata, which limits the privacy it can provide to users. So, while Tor scores very well on the practicality dimension, it scores much lower on the privacy one. The research challenge is thus whether we can reach higher privacy while keeping the system practical. In the last part of the presentation, Srini discussed a new system called Lightning, which aims to bring cryptographically-secure anonymous communication a step closer to practical deployment. I look forward to seeing the details of Lightning and its further deployments.

Contact




Discover how SAP Security Research serves as a security thought leader at SAP, continuously transforming SAP by improving security.


Luca Compagna, research expert at SAP Security Research, luca.compagna
