adesso Blog

I began my last blog post with the words "Regardless of possible residual risks...". It is precisely these residual risks that I would like to focus on today. To do this, I will first define what we mean by residual risks and then take a closer look at them.

Residual risks - a brief definition

In economic terms, residual risks can be defined as the risks that I can bear as a company or as an individual. In other words, "my" risk exposure is acceptable to me: the materialisation of such a risk may be painful, but it has no serious impact on my company or on me as an individual. Arriving at this point is essentially the aim of a risk management process. Of course, there are interesting edge cases in which a catastrophic event has an extremely low probability of occurrence, for example a meteorite falling on my head. But more on that another time.
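
To make this way of thinking a little more tangible, here is a minimal sketch in Python of the expected-loss view of risk acceptance. The probabilities, impacts and the acceptance threshold are purely illustrative assumptions on my part, not recommendations.

```python
# Minimal sketch of the expected-loss view of residual risk.
# All figures (probability, impact, acceptance threshold) are
# illustrative assumptions, not recommendations.

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    probability: float   # estimated likelihood per year (0..1)
    impact: float        # estimated damage in EUR if it materialises

    @property
    def exposure(self) -> float:
        # Classic expected-loss approximation: probability x impact
        return self.probability * self.impact


ACCEPTANCE_THRESHOLD = 10_000  # EUR per year I am willing to bear (assumption)

risks = [
    Risk("Zero-day in a third-party library", probability=0.1, impact=150_000),
    Risk("Meteorite falls on my head", probability=1e-9, impact=10_000_000),
]

for risk in risks:
    verdict = "acceptable" if risk.exposure <= ACCEPTANCE_THRESHOLD else "needs treatment"
    print(f"{risk.name}: exposure {risk.exposure:,.2f} EUR/year -> {verdict}")
```

Real risk management of course weighs more than a single expected-loss figure, but the sketch shows the basic logic behind an "acceptable" residual risk.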

I don't want to go into this economic aspect any further - even though, strictly speaking, it is relevant to what I have to say. Of course, the development and operation of software is also part of economic risk management.

I am more interested in the maximum perspective: what risk remains if I consistently exploit all technical and organisational possibilities? I am aware that this probably goes beyond what is strictly necessary from an economic point of view.

In the next step, I would like to briefly outline what would have to be in place before I can speak of technically and organisationally unavoidable risks.

Prerequisite for achieving the minimum risk

One prerequisite for this is the implementation of a secure software development process and the most secure operation possible (in combination often referred to as DevSecOps). Without going into too much detail, this includes the following aspects:

  • Fully understanding the security requirements and implementing the appropriate measures.
  • Carrying out a design analysis or threat modelling (possibly several times).
  • Running tool-supported static code analysis, supplemented selectively by manual code reviews, and eliminating the vulnerabilities found.
  • Continuous dependency management (sometimes also called software composition analysis) to patch vulnerable third-party libraries within 24 hours if possible (a minimal sketch of such a check follows after this list).
  • A hardened operating and network environment.
  • Security-checked configurations.
  • Secure handling of keys and credentials.
  • Defined, implemented and tested recovery targets (e.g. recovery point objective (RPO), recovery time objective (RTO) or work recovery time (WRT)).
  • An adequate authorisation concept.
  • A comprehensive security test concept - including possible dynamic analyses and penetration tests.
  • Not forgetting expertise in the evaluation of security mechanisms, vulnerabilities and possible solutions.
  • Measures to protect against DDoS and DoS attacks.

This list is certainly somewhat abbreviated, but it is intended to make one point: all known measures are simply implemented, and implemented with the necessary expertise.
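
To illustrate the dependency management point from the list above: the following minimal Python sketch queries the public OSV vulnerability database (osv.dev) for a handful of pinned packages and fails if anything known is found. It is a simplified stand-in for a real software composition analysis tool; the package list, the hard build failure and the handling of the response fields are assumptions for the sake of the example.

```python
# Minimal sketch of a dependency check that could run in a CI pipeline:
# it asks the public OSV database (https://osv.dev) whether a pinned
# package version has known vulnerabilities and fails the build if so.
# The package list below is a made-up example; a real pipeline would
# read it from requirements.txt or a lock file.

import json
import sys
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical pinned dependencies of the project under test.
DEPENDENCIES = {
    "requests": "2.19.0",
    "jinja2": "2.10",
}


def known_vulnerabilities(name: str, version: str) -> list[dict]:
    """Query OSV for published vulnerabilities of a PyPI package version."""
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": "PyPI"}}
    ).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response).get("vulns", [])


def main() -> int:
    findings = 0
    for name, version in DEPENDENCIES.items():
        vulns = known_vulnerabilities(name, version)
        for vuln in vulns:
            print(f"{name} {version}: {vuln.get('id')} - {vuln.get('summary', 'no summary')}")
        findings += len(vulns)
    if findings:
        print(f"{findings} known vulnerabilities found - failing the build.")
        return 1
    print("No known vulnerabilities found.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice, a dedicated software composition analysis tool in the CI pipeline would typically also cover transitive dependencies, licence questions and update guidance.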

Are there any remaining risks?

The simple answer is "yes". Once again, without claiming to be exhaustive, here are some of the remaining risks:

  • Zero-days (as mentioned in my last blog post), i.e. vulnerabilities, mainly in third-party libraries, that become known before a patch is available.
  • A critical vulnerability in an indirectly used library (a transitive dependency).
  • False negatives - for example, vulnerabilities in code, viruses and the like that are not detected by tests and checks.
  • Wrong decisions and misjudgements - for example, genuine findings (true positives) that are misinterpreted.
  • Abuse of rights, i.e. an employee uses his or her legitimate access rights to attempt fraud or, in the aggravated case, colludes with others to commit fraud.
  • Human error - an employee makes a serious mistake out of carelessness or in haste.
  • DDoS / DoS attacks with a very high volume.

Ideally, of course, compensating measures are already in place for most of these residual risks. For example, you can triage vulnerabilities as a team or monitor critical processes in order to recognise attempted fraud after the fact (a sketch of such a check follows below). In my opinion, however, what these risks all have in common is that they cannot be completely eliminated.
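
To give one concrete example of such a compensating measure: the sketch below checks a hypothetical audit log for violations of the four-eyes principle, i.e. critical transactions that were created and approved by the same person. The log format and field names are made up for illustration.

```python
# Minimal sketch of an after-the-fact control for the "abuse of rights"
# residual risk: flag critical transactions where the same employee
# both created and approved the record (violation of the four-eyes
# principle). The audit-log structure is a hypothetical example.

from collections import defaultdict

AUDIT_LOG = [
    {"transaction": "PAY-1001", "action": "created",  "user": "alice"},
    {"transaction": "PAY-1001", "action": "approved", "user": "bob"},
    {"transaction": "PAY-1002", "action": "created",  "user": "carol"},
    {"transaction": "PAY-1002", "action": "approved", "user": "carol"},  # suspicious
]


def four_eyes_violations(log: list[dict]) -> list[str]:
    """Return transactions created and approved by the same user."""
    actors: dict[str, dict[str, str]] = defaultdict(dict)
    for entry in log:
        actors[entry["transaction"]][entry["action"]] = entry["user"]
    return [
        tx for tx, actions in actors.items()
        if actions.get("created") and actions.get("created") == actions.get("approved")
    ]


for tx in four_eyes_violations(AUDIT_LOG):
    print(f"Review {tx}: created and approved by the same user")
```

Controls like this make abuse of rights visible after the fact, but they reduce the residual risk rather than eliminate it.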

This is where economic considerations come into play. In most cases, the residual risk should be within acceptable limits.

What does the reality look like?

My hobby is the creation of threat models and risk analyses, so I have been able to support a number of projects over the years. In the almost 15 years that I have been doing this, a lot has happened, but it has to be said that the vast majority of projects have not yet reached the level described above.

As always, there are also very positive examples. Specifically, I have had two customers in the last few months that I would actually consider to be very close to this "optimum". But what do you do when you are already very well positioned?

First of all, security is of course a process, which means that you must not let your diligence and efforts slacken. To minimise the risk of becoming too routine in dealing with security issues, I typically recommend the following measures:

1) In-depth training: in addition to the more general awareness measures, actually conduct training sessions with new content for the teams. For example, instead of repeating the classic OWASP Top Ten secure coding training, you can offer hacking training for developers.

2) Focus on individual "corners". With my threat models, I usually start at a fairly high level of abstraction. The idea is then to take an iterative, targeted look at individual functions - this can be a code review, an in-depth threat model or a very specific white-box pentest.

3) Change roles and perspectives. To avoid routine, it is a good idea, for example, to rotate security experts and specialists between projects. Personally, I always enjoy learning from experience and from exchanging ideas with colleagues, who often have detailed knowledge that I lack. A new security expert brings new questions and new perspectives along with their knowledge.

4) Try out new methods. Red and blue teaming with a gaming aspect across projects is an option that has been around for some time. Or you can vary the threat modelling method: if you prefer "manual" approaches, try a tool for a change, or play the Elevation of Privilege card game.

5) Praise and recognise. Sometimes it can be enough to mention good work in the area of security in a positive light. There is enough bad news in this area.

Conclusion

My conclusion should come as no surprise. There is no such thing as one hundred per cent security, no matter how hard we try. However, with sensible processes and sufficient expertise, we can achieve a pretty good level - and that level should be maintained. Basically, I believe that we are on the right track in the IT sector. Technology trends such as AI in security can help us, but unfortunately we are often still sitting on a huge mountain of technical security debt. There is still a lot to do, so let's keep at it.

Would you like to find out more about security topics at adesso and what services we offer to support companies? Then take a look at our website.

You can find more exciting topics from the world of adesso in our previous blog posts.


Author Oliver Kling

Oliver Kling is Competence Center Leader Application Security in the field of IT Management Consulting at adesso and works with his security colleagues to expand and further strengthen the portfolio in this area. His personal hobbyhorse is threat modelling; he has acquired and refined the necessary skills and the corresponding knowledge about secure design in over 100 analysis workshops.

Category:

Methodology

Tags:

IT-Security
