Academic perspective: competitions and learning penetration testing

In a new direction for my blog, I’ve decided to occasionally take a look into the academic world to identify developments of interest that we can translate into the profession. Or, at the very least, to do a level of read-across and see whether there are lessons to be learned.

The first paper in this series is “King of the Hill: A Novel Cybersecurity Competition for Teaching Penetration Testing” by Kevin Bock, George Hughey, and Dave Levin of the University of Maryland, published in USENIX ASE ’18, 2018.

Abstract

“Cybersecurity competitions are an effective and engaging way to provide students with hands-on experience with real-world security practices. Unfortunately, existing competitions are ill-suited in giving students experience in penetration testing, because they tend to lack three key aspects: (1) pivoting across multiple machines, (2) developing or implanting custom software, and (3) giving students enough time to prepare for a lively in-class competition. In this paper, we present the design, implementation, and an initial run of King of the Hill (KotH), an active learning cybersecurity competition designed to give students experience performing and defending against penetration testing. KotH competitions involve a sophisticated network topology that students must pivot through in order to reach high-value targets. When teams take control of a machine, they also take on the responsibility of running its critical services and defending it against other teams. Our preliminary results indicate that KotH gives students valuable and effective first-hand experience with problems that professional penetration testers and network administrators face in real environments.”

This paper makes a number of observations about existing approaches to competitions; of these, I think the first and third are the most relevant.

The first observation is that existing competitions tend not to provide opportunities to pivot across multiple machines, which I agree with to some extent, but not in all cases. I would generalise this to a broader observation that realistic network designs, and more importantly realistic security architectures, should be used; later in the paper this point is made more clearly. Such architectures could be graduated in levels of difficulty in the form of different scenarios.
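
To make the pivoting idea concrete, here is a minimal sketch of a TCP relay that could be run on a compromised mid-network host so that the attacker’s machine can reach an internal target that is not directly routable. The addresses and ports are purely illustrative assumptions, and in practice established tooling (e.g. SSH port forwarding) would usually be used instead.

```python
import socket
import threading

# Assumed, illustrative addresses: listen on the compromised host and
# forward traffic to an internal target that only this host can reach.
LISTEN_ADDR = ("0.0.0.0", 8080)
TARGET_ADDR = ("10.0.2.15", 80)   # hypothetical internal high-value target

def pipe(src, dst):
    """Copy bytes one way until either side closes or errors."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(TARGET_ADDR)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    main()
```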

Preparation is probably the most important observation in the paper to my mind. Existing competitions tend to present challenges without any forewarning, and while this may serve an elitist goal of identifying the most talented, it does so at the cost of reducing opportunities for learning (i.e. learning before and after the event, not just after it).

By proposing realistic architectures as essential to a penetration testing competition, the authors make the useful observation that such environments demand a higher level of strategic thinking rather than a purely tactical plan.

The emphasis on attack and defend, as opposed to pure attack (and compromise), is another useful point the paper makes, and the game’s requirement that victors continue to offer services to the network is an interesting design choice.

The addition of vulnerable high-scoring machines during the game is also interesting, as it promotes an emphasis on continuous scanning. Attackers are, of course, unlikely to scan a network continuously in practice, but as the paper notes, doing so is very representative of the challenge NOCs face from a defensive perspective.
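
As a rough illustration of what that continuous scanning might look like, the sketch below periodically sweeps a subnet and reports hosts that have newly appeared. The subnet, probe port, and interval are assumptions made for the example, not values from the paper.

```python
import socket
import time
import ipaddress

# Assumed, illustrative values for the competition network.
SUBNET = ipaddress.ip_network("10.0.2.0/24")
PROBE_PORT = 22          # any service expected to be listening
INTERVAL_SECONDS = 300

def host_is_up(ip):
    """Treat a successful TCP connect to the probe port as 'host present'."""
    try:
        with socket.create_connection((str(ip), PROBE_PORT), timeout=0.5):
            return True
    except OSError:
        return False

known_hosts = set()
while True:
    for ip in SUBNET.hosts():
        if ip not in known_hosts and host_is_up(ip):
            known_hosts.add(ip)
            print(f"new host discovered: {ip}")
    time.sleep(INTERVAL_SECONDS)
```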

An automated score bot and a point-allocation policy (e.g. deducting points for failing to respond to pings) provide the basis for capturing realistic tallies of team performance, and are a useful addition.
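
A toy sketch of such a score bot is shown below: each round it checks whether every team’s critical service still responds and adjusts scores accordingly. The team names, addresses, point values, and round length are all invented for illustration and are not the paper’s actual scoring policy.

```python
import socket
import time

# Hypothetical teams and the service each must keep running.
TEAMS = {
    "team-red":  ("10.0.3.10", 80),
    "team-blue": ("10.0.3.20", 80),
}
POINTS_UP = 5       # awarded when the service responds
POINTS_DOWN = -3    # deducted when it does not respond to a probe
ROUND_SECONDS = 60

scores = {team: 0 for team in TEAMS}

def service_up(addr):
    """A TCP connect within the timeout counts as the service being up."""
    try:
        with socket.create_connection(addr, timeout=2):
            return True
    except OSError:
        return False

while True:
    for team, addr in TEAMS.items():
        scores[team] += POINTS_UP if service_up(addr) else POINTS_DOWN
    print(scores)
    time.sleep(ROUND_SECONDS)
```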

They also have some publicly available resources at https://koth.cs.umd.edu.