How do we develop the next generation of Cyber Specialists?

NCC have put up an interesting blog post on the challenge of developing the next generation of consultants (https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2019/january/developing-the-next-generation-of-cyber-risk-consultants/).

NCC’s experience goes to show how much creativity businesses are willing to commit to the task.

It’s widely recognised that the cyber security industry has some of the most critical talent shortages at the moment, and innovative thinking is needed to try to repair the shortfall of consultants available for hire.

The first challenge is the scarcity of resource. Many more vacancies than applicants is the typical headline. This is where most media outlets and magazine reviews end their scrutiny, but underneath this there are some structural topics worthy of further thought.

The most significant is what I would call the “experience levels” problem. The cyber security industry has inherited its pool of candidates from the Information Assurance and Information Security fields (setting aside the debate over whether cyber security = information security), both established long before cyber security was driven up in priority over the past decade. This has given the field a large number of highly-experienced professionals, now at the pinnacle of their careers, whose succession has to be planned for by organisations. These are the kinds of specialists who have a seemingly infinite understanding of the field, can effortlessly navigate complex waters, and provide the backbone of corporate cyber security programmes. The result is a large cohort of experienced practitioners, very few mid-career experts, and, until recently, hardly any early-career entrants. Inevitably, this will lead to a further challenge in the coming decade as highly experienced practitioners retire from the profession or are promoted into CISO roles to focus on strategy.

The second problem is building the “pipeline of talent”. This has been the societal and industrial response to the perceived staffing challenges in the profession. Most approaches focus almost exclusively on the development of cyber security degree programmes in universities, and indeed universities have turbo-charged their efforts there. These efforts are helpful, and the numbers look good on paper. But they will not deliver the transformational change needed by the sector, for the simple reason that, in terms of experience, they address the shortfall in early-career professionals but not the mid-career experience shortfall or the pending highly-experienced vacuum.

What this means, broadly, in terms of the three experience levels is something like this:

  • Early-career professionals are growing in number, due to training initiatives in universities. This is good, and to be applauded. But early-career professionals need mid-career managers and team leaders to direct their efforts. Despite the intense coverage in media and press, this is really Priority #3 – the problem is being addressed and managed.
  • Mid-career professionals remain incredibly scarce. These practitioners are relied upon to lead teams and occupy mid-level management roles, using their years of experience to provide a sensible and measured contribution to management activities. It will be a decade or more before the growth at the early-career level begins to expand the field of candidates for these roles, and this remains an acute problem for employers. This is therefore Priority #1.
  • Highly-experienced, late-career professionals are in the market, but the pool of candidates is dropping drastically. In 5-10 years’ time, this will become something of a predicament for the domain, as cyber threats continue to expand in number and sophistication. The loss of knowledge will be a particular difficulty to overcome. This is Priority #2, and will become Priority #1 in the next decade.

It appears to me, based on this relatively simple breakdown, that the talent challenges for the field will not be overcome for two to three decades. Within the next 5-10 years we will see the impact of losing highly-experienced leaders.

Of course, companies have not stood still, and in practice what has happened in many is that cyber security vacancies have been staffed by personnel switching from other, related areas. This is a form of solution, but the lack of a systematic, naturally-progressing career path will inevitably lead to challenges further down the line. This is perhaps where initiatives like NCC’s have a lot of merit.

There are lots of other challenges to consider as well, reflected in the broader computing field. The lack of interest in STEM subjects at university has been a particular problem for decades. This has probably fuelled the interest in cyber-security-specific degree programmes at universities, which blend non-STEM content into a STEM core and are arguably more appealing than a pure-STEM degree.

In some ways cyber-security programmes have side-stepped the problem. But promoting interest in STEM careers remains challenging, and this underlying difficulty will complicate the development of talent in cyber security for decades to come. Diversity is another goal that many in STEM and general engineering have advocated, for example promoting technical and engineering careers for women. This is yet another challenge within the broader computing field that will complicate the development of the profession.

Efforts by companies to develop and share their structured programmes, like NCC’s, will help navigate this incredibly complicated landscape. To me there does not appear to be a single, straightforward solution, but by sharing and communicating efforts, progress will be made.

This and other internal efforts in companies underline the need for substantial training budgets within cyber security teams and functions. This is perhaps the most tangible step businesses can take at the moment, and will be a key determinant for applicants choosing between what are likely to be multiple, competing opportunities.

Securing email services – DMARC and TLS

Email security enhancements are an easy modification to make to corporate mail services, and standards such as DMARC and TLS are relatively straightforward ones to roll out.
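
As an illustration, a basic monitoring-only DMARC policy is nothing more than a DNS TXT record; once published (the domain and reporting mailbox below are hypothetical), it can be checked with dig:

$ dig +short TXT _dmarc.example.com
"v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"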

DMARC initiatives over in the US are showing how effective a coordinated programme can be if executed correctly.

InfoSecurity Magazine has a good article on the subject, with some excellent statistics:

https://www.infosecurity-magazine.com/news/dmarc-adoption-surges-ahead-mandate/

SANS FOR585 Q&A: Smartphone Forensics – Questions answered

Reviewing blog posts this morning, I came across an interesting article from SANS discussing some of the nuances of forensic recovery from smartphones.

This is such a complex space at the moment, with continuing interest from smartphone manufacturers in protecting user privacy (e.g. mandatory full-disk encryption). Also take a look at the useful poster at the bottom of their blog post.

Link: https://digital-forensics.sans.org/blog/2019/01/07/sans-for585-qa-smartphone-forensics-questions-answered

Using asymmetric capabilities to secure files using GPG

In my previous posts I discussed how GPG can be used to encrypt a large file using a symmetric key (passphrase). Provided keys are changed regularly, this approach has clear advantages in terms of simplicity, speed, and authentication of the parties.

However, the loss or theft of the key would break the security assumptions, allowing attackers to snoop on data and potentially impersonate the origin host. A brute-force or dictionary attack can also be mounted against a weak passphrase.

In addition, the key change procedure required to manage this risk is laborious, and the task of establishing keys and information exchange processes does not scale particularly well: the number of pairwise keys grows as O(n^2), where n is the number of communicating parties (10 parties need 45 keys; 100 need 4,950). The exchange of keys is also complicated and should take place over a secure medium, such as a face-to-face meeting or a secure courier.

In my post I also mentioned that keys and certificates (more specifically, asymmetric cryptography), in some circumstances, could be regarded as preferable. What does this new approach offer?

Textbook answer follows! Advantages include vastly reduced complexity of the task – from O(n^2) to O(n) – though that benefit only becomes evident for large numbers of communicating entities. The security requirements of key distribution are much lower, as the public key has no confidentiality requirement. In addition, a sender only has to possess the public key in order to encrypt a message, rather than a more sensitive symmetric key. The disadvantages of pure asymmetric cryptography principally concern vastly reduced speed and a more limited choice of algorithms.

Fortunately GPG (and its commercial forebear, PGP) does not run into these drawbacks directly. GPG does not operate in pure asymmetric mode, instead opting for a hybrid of both techniques (asymmetric plus symmetric). However, as we will see, it does remain vulnerable to expiry risks, which presents a problem for automation.

To use GPG in this way, we can take the following steps.

Create the key pair for the recipient

Follow my previous blog post for the instructions to create a key pair for the recipient, and how to import the public key on the sending host. It is also important to create a certificate containing the signed public key on the target host.

This is a potential risk area in PGP/GPG. If a certificate has expired, encryption may fail or the risk profile of the transmission changes. In addition, a malicious user could generate or obtain a revocation certificate and distribute it, delivering a form of DoS.
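
One simple mitigation is to keep an eye on key expiry on the sending host. For example (assuming the recipient key was imported as described above), the expiry dates are shown against the key and subkey:

$ gpg --list-keys [recipient email address]

Look for the [expires: …] markers in the output and diarise renewal well ahead of the date.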

Encrypt the file using the public key

The sending host should now have the public key of the recipient imported and signed as trusted. Along the way they would have verified the fingerprint of the public key, using a secure channel.

On the sending host, the file should be encrypted using the following command:

$ gpg --output [plaintext file].enc --encrypt --recipient [recipient email address] [plaintext file]

What has happened during this process? GPG employs its hybrid approach. It first creates a session key that is used to encrypt the file.

The session key is then encrypted using the public key of the recipient. This delivers two encrypted outputs (the data and the symmetric session key). Both are then packaged in the encrypted file as an octet stream in accordance with RFC 4880.

This package then makes its way to the recipient, who uses GPG to decrypt first the session key and then the payload.
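
If you are curious, you can see this structure for yourself by asking GPG to dump the packets in the encrypted file (shown here on the recipient host, where the private key is available; the packet names below are indicative):

$ gpg --list-packets [plaintext file].enc

The listing should show a public-key encrypted session key packet (the session key wrapped with the recipient’s public key) followed by a symmetrically encrypted data packet holding the payload.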

Decrypt the file using the private key

The recipient host can decrypt the file using the following command, which uses the filename embedded in the RFC 4880 file:

$ gpg --quiet --use-embedded-filename [encrypted file name]

Alternatively, the recipient can specify their own filename with either of the two (equivalent) commands:

$ gpg --decrypt [encrypted file name] > [target plaintext file name]

$ gpg --output [target plaintext file name] --decrypt [encrypted file name]

Summary

This scheme is more elaborate than the more straightforward symmetric approach, and there is greater scope for potential error (e.g. accidental disclosure of a private key).

We also encounter a lot more complexity in the ongoing management of this scheme, which may prove difficult to justify for minor applications. Assuming time-limited certificates, they will eventually expire and greater work is needed to maintain a working configuration.

PGP’s (and GPG’s) key advantage here is the combination of both encryption approaches, which delivers speed alongside the enhanced key management of public/private keys.

We are also able to make use of revocation certificates, which can ensure certificates that are compromised or superseded are no longer used.

There are many other benefits GPG offers that are not discussed in this article, including ways of signing public keys to demonstrate trust. It’s well worth taking a look and seeing what the art of the possible is.

GPG and key generation

In this post I’m walking through the process of creating a GPG key. This is needed for GPG operations that require either the public or private key for encryption or signing, respectively. I’ll be referring to this post in a future post where I talk about the combination of PKC and GPG to support secure file transfer.

Note: strictly speaking, GPG generates keys most of the time. A certificate is created when we sign the public key using our private key. We also create a certificate when a revocation certificate is generated, either at the key generation stage or afterward.
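
As an aside, a revocation certificate can also be generated manually after the event. A minimal sketch, assuming the key is identified by the email address used at generation:

$ gpg --output revocation-cert.asc --gen-revoke [email address for the key pair]

Keep the output file somewhere safe, as anyone holding it can revoke your key.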

Generating the keys

The process for creating GPG keys is very straightforward, as the GPG executable will walk the user through the steps required:

$ gpg --generate-key

This produces a new key pair using a set of default parameters. This also generates a revocation certificate, which is stored within the GPG home directory.

You can also access a more comprehensive set of options by using the alternate command, --full-gen-key, which I recommend.

The options presented will be as follows:

  • Key type. The default is to generate two RSA keys (a primary signing key and an encryption subkey). You can also choose DSA and Elgamal. My advice is to stick with RSA for both.
  • Key size. Lots of discussion can be found on this point. The default of 3072 is adequate, and 4096 is also an option.
  • Key expiry. Unless there are exceptionally strong requirements for a non-expiring key (in which case mitigating controls may be needed), set the key expiry to a reasonable value. OpenPGP best practice is generally regarded as less than two years. My recommendation is one year.
  • Identity (name, email address, comment). GPG will construct the identification string as “Name (comment) <email>”. There is no requirement to use a valid email address, but I’d recommend creating one that is unique and corresponds to a domain or in some way is controllable by you.
  • Passphrase. This protects access to the private key and should be a strong password.
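
If you need to create keys repeatably (for test hosts, say), GPG also supports unattended generation from a parameter file. A minimal sketch, assuming GnuPG 2.1+ and a throwaway file called keyparams.txt that mirrors the options above:

$ cat > keyparams.txt <<'EOF'
Key-Type: RSA
Key-Length: 3072
Subkey-Type: RSA
Subkey-Length: 3072
Name-Real: Example User
Name-Email: user@example.com
Expire-Date: 1y
Passphrase: [passphrase]
%commit
EOF

$ gpg --batch --generate-key keyparams.txt

Remember to delete the parameter file afterwards, as it contains the passphrase.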

Exporting the public key

The public key for user A must be known to user B in order for B to send information securely to A. There are no particular confidentiality requirements for the public key; however, the integrity of the key should be maintained. For this reason it is recommended to distribute the public key in a secure way, e.g. over HTTPS.

$ gpg --armor  --emit-version  --emit-version --emit-version --export [email address for the key pair] > output-public-key.gpg.armor

The command above outputs an armored (ASCII) public key that can be sent over email. This is the predominant format for public keys on the Internet. The version string can be controlled by reducing the number of “emit-version” switches or removing them entirely.
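
For reference, the armored output is plain text wrapped in the familiar BEGIN/END markers, so it travels happily over email or HTTPS. It looks something like this (body truncated):

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQENB...
...
-----END PGP PUBLIC KEY BLOCK-----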

Considerations about version strings

The version string is not used by any current implementation of GPG, and disclosing version strings is usually advised against due to the potential for profiling of software vulnerabilities. However, there are some reasons why you might want to retain the version string:

  • In case future versions of GPG change the way keys are processed, meaning the version string could be used in the future to ensure correct operation
  • Potentially improving interoperability between communicating parties in some way, along the same lines as above

Distributing the public key

The public key can be uploaded to a key server or distributed in some other way, and can be sent in the clear without any additional encryption (although, as noted above, its integrity should still be protected).

Your public key can also be sent directly from GPG to a public keyserver, as follows:

$ gpg --keyserver certserver.pgp.com --send-keys [key ID of the public key]

Importing a public key

On other systems the public key should be imported once retrieved. For GPG, this can be accomplished using the following command:

$ gpg --import <key file name>

When importing the public key, care should be taken to verify the key is in fact the expected key. This is usually achieved by comparing the fingerprints of the file as follows:

$ gpg --fingerprint [email address of the public key]

The output will include the key fingerprint as a string of hex groups that should be checked with the key originator.
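
Purely for illustration (the fingerprint below is made up), the output looks roughly like this:

pub   rsa3072 2019-01-10 [SC] [expires: 2020-01-10]
      1A2B 3C4D 5E6F 7A8B 9C0D  1E2F 3A4B 5C6D 7E8F 9A0B
uid           [ unknown] Example User <user@example.com>
sub   rsa3072 2019-01-10 [E] [expires: 2020-01-10]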

Finally, sign the key that is imported if you are satisfied it can be trusted:

$ gpg --sign-key [email address of the public key]

Adding file encryption to file transfers: an imperfect approach

In some niche automation applications you might find yourself transferring files over TLS but still wanting a further layer of encryption of the file being transferred – not so much as a strong confidentiality and integrity control in its own right, but for greater assurance. A further constraint here is that PKC/certificates are not being used.

On Linux, one solution is GPG. Assuming we have one file on the sending system, the command required to automate the encryption of the file with a known passphrase would be:

$ gpg --yes --batch --passphrase=[secret] -c ${srcfile}

The output file is “${srcfile}.gpg”.

On the receiving system, the corresponding decrypt command would be:

$ gpg --yes --batch --passphrase=[secret] -d ${encfile} > ${decfile}

By default, GPG will use the AES128 cipher in version 2.1 and later, and CAST5 in earlier versions. You might seek to use a cipher with a longer key length, such as AES256. This is achieved by adding the following switch to each command:

… --cipher-algo AES256 …
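
Putting that together with the earlier encrypt command, the full invocation would be along these lines:

$ gpg --yes --batch --passphrase=[secret] --cipher-algo AES256 -c ${srcfile}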

Because AES uses more rounds at larger key sizes (14 rounds for AES256 versus 10 for AES128), AES256 carries a performance penalty of around 40% over AES128, so you might find AES192 (12 rounds) fits your requirements better.

The available ciphers depend on your implementation, and can be viewed using:

$ gpg --version

Some common ones under GPG include IDEA, 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH, CAMELLIA128, CAMELLIA192, CAMELLIA256. For large file encryption, it’s always worthwhile checking cipher performance at the outset. You might also glean some useful information from the OpenSSL speed test for your particular CPU:

$ openssl speed
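
For a quick like-for-like comparison on your own hardware, you could also simply time GPG against a representative file, for example:

$ time gpg --yes --batch --passphrase=[secret] --cipher-algo AES128 -c ${srcfile}
$ time gpg --yes --batch --passphrase=[secret] --cipher-algo AES256 -c ${srcfile}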

The secret/passphrase is a symmetric key and ought to be as strong as possible; given the application is M2M, there is no reason not to use a high-strength key. OpenSSL provides a suitable command to generate one from a random source, for example a password built from 45 random bytes (60 base64 characters):

# openssl rand -base64 45

There are some weighty drawbacks with this method. By default, a passphrase passed on the command line on Linux will be viewable by other users on the system using “ps -ef” or by exploring the proc filesystem. They do not need superuser privileges to do so.

Whether data passed in this way is suitable will boil down to the security governance of the M2M hosts sending and receiving data, and more generally the risk assessment. If they are not shared systems, it may be acceptable with some minor additional controls (key change procedure, patch management plan, etc.).

As the password is passed on the command line, it would be accessible to other users in the event either system is shared with untrusted users. Any test commands will also be deposited in the bash command line history, and the passphrase will inevitably be referenced from, or stored in, a script. These issues are not unique to this approach: using asymmetric crypto will necessitate a private key that is equally sensitive.

A workaround to parameter visibility is to use standard input (STDIN) to input the passphrase into GPG as follows:

$ echo "[secret]" | gpg --yes --batch --passphrase-fd 0 -d ${encfile} > ${decfile}

Even so, we are left with a script containing an embedded key. The workaround will also not help if other users can read the script containing the passphrase, so it should be suitably secured:

$ chmod u+r+w+X,g-r-w-X,o-r-w-X [scriptname]

Finally, also ensure the locations used to write files as input and output to the process are also secured in the same way.
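
Another option worth considering is to keep the passphrase out of the script entirely and read it from a tightly-permissioned file using GPG’s --passphrase-file switch. A sketch (the path here is illustrative, and newer GnuPG releases may also need --pinentry-mode loopback for the option to take effect):

$ chmod 600 /etc/transfer/gpg-passphrase
$ gpg --yes --batch --pinentry-mode loopback --passphrase-file /etc/transfer/gpg-passphrase -d ${encfile} > ${decfile}

This does not remove the secret from disk, but it does keep it off the command line and out of the process table.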

Passing the passphrase on the command line is not ideal, and using a public key based solution and certificates with GPG is far preferable. That brings its own issues, chief among them being the governance wrap needed around key and certificate management (particularly in enterprise environments).

Moreover, key expiry, particularly in M2M solutions, would be a significant risk to availability (think Ericsson and O2).

When I get a few minutes I’ll write up the PKC-based equivalent for this post. Stay tuned.

Switching to hardware-based security keys is now a reality for everyday users

Yubico Security Key supporting FIDO2

One of the interesting developments in 2018 was the announcement by Yubico of their new Security Key (SK) – a hardware-based security key that can be used in place of passwords.

As you might have guessed from my previous post, I’m not a fan of passwords.

Disadvantages abound with password authentication. Sharing and recycling of passwords are two of the most obvious drawbacks, magnified in today’s Internet environment.

Enterprises should not rely solely on password authentication for users, and the future is most definitely going to involve more than the limited security passwords can provide. It could be argued that end-user domain logon for instance, should always involve another factor.

The Security Key is based on FIDO2, an evolution of earlier efforts by Yubico working in collaboration with Google. FIDO is an authentication standard managed by the FIDO Alliance.

FIDO2 consists of the W3C Web Authentication (WebAuthn) specification together with the Client to Authenticator Protocol (CTAP). Together they communicate directly with a hardware authenticator, which in this case is the SK.

FIDO avoids a number of problems commonly encountered with passwords. Apart from re-use issues, by design password authentication schemes have other drawbacks – they can be replayed time and time again, and there is no inherent mechanism in password authentication to stop it. Passwords are also potentially vulnerable to interception.

Both of these additional issues are addressed by the Yubico SK device. The SK is a hardware authenticator supporting passwordless authentication based on public key cryptography, second factor, and multi-factor as required.

As well as existing services such as Google accounts, the FIDO2 SK also supports Microsoft accounts on Windows 10. Enterprise users can integrate the SK via Windows Hello, as a smart card integrating with a Windows CA, using keys supporting the SP 800-73 PIV interface.

What I particularly like about the Security Key is its support for multiple standards from one device, which makes it a slightly different offering to traditional second-factor solutions such as PIV smart cards implemented using credential providers in Windows domain logon – all while still being able to service LSA-governed PIV MFA, in what I would argue is a better form factor.

It’s an interesting time at the moment for end-user security, and solutions like Yubico’s are likely to proliferate as security awareness increases. Worth a look, and given the price it is not difficult to try out – but remember to enrol more than one device and keep a backup in a secure location!

Maximising your password vault security

Following on from my previous post, here are my tips for making the best use of a password vault on your PC:

  • If your vault has a password generator, use it on all future passwords and over time replace as many existing passwords as possible
  • Use a strong password for any vault Internet accounts
  • Use a different and highly secure password for local vault encryption
  • Learn all the features in the tool and how to make use of them
  • Install the vault on all frequently used devices

Finally, always use two factor authentication on critical accounts even if they are managed by the vault. At a minimum ensure a second factor is used on all email accounts.

The caveat for all of the above is to do so only if it meets your requirements and risk assessment. Security is not a continuum of improvement, but a scale whose tipping point is usability.

What makes a good password vault?

Password vaults are very helpful additions to a desktop environment, particularly for personal use. They can provide secure storage of passwords, synchronisation across multiple devices, and a myriad of other features.

What are we trying to achieve when using a vault? What are the critical high level objectives?

  • Reducing password reuse
  • Promoting regular password change
  • Increasing password complexity (by using machine generated passwords)
  • Enhancing secure storage of passwords
  • Facilitating “digital legacy”

But capabilities can vary, so what should you look out for in a personal password manager? Here are some good features:

  • Automated form filling, ideally on user prompt
  • Two-factor authentication for the vault access account, to allow download of the password vault (without decryption of the vault)
  • Encryption of the password vault using a local password that is not shared with the vault host (an important subtlety that should not be overlooked – host authentication combined with local encryption and decryption is a significant security enhancement)
  • Browser import, to take logins from browsers and store them securely (ideally removing them from browser password stores)
  • Secure sharing with other recipients
  • Password generation, including a variety of configurable parameters such as length, complexity, etc.
  • Copy and paste features to allow passwords to be copied to the clipboard
  • Browser plugins to integrate directly with the password vault and minimise use of the clipboard as much as possible
  • Synchronisation of a password vault across devices
  • Free text storage of secrets in the password vault, for instance challenge response sentences or codes
  • Digital legacy – the ability to share credentials with another person in the event you become incapacitated or unable to use your logins
  • Automatic review of passwords for strength and quality, with advisories as appropriate
  • Finally, and importantly, a broad range of browser support, including all mainstream browsers and also mobile apps

The primary benefits of using a password vault include:

  • Reducing the potential for passwords to be stored insecurely
  • Removing the risk of data loss (and therefore loss of passwords) through the use of Cloud synchronisation (also a weakness)
  • Allowing highly complex passwords to be used, minimising simple password use
  • Minimising password reuse across accounts
  • Secure storage of passwords using local encryption, minimising some but not all risks of Cloud vault storage
  • Storage of related authentication data, such as secrets and challenge response codes
  • A reduction in the use of password reset procedures, possibly allowing more secure (and cumbersome) reset methods to remain in place
  • The ability for others to use your passwords when you cannot
  • Automatic review of passwords for strength and quality, ensuring you are able to maintain the strongest password posture and minimising the attack surface

Password vaults present risks and issues at the same time:

  • In my mind, the primary risk is encountered when losing the password for the vault, which usually leads to the password vault becoming inaccessible. This is a risk to availability. The solution is to maintain a hard copy of the password in an appropriately secure location (e.g. a safe).
  • A lesser risk, reduced with local vault encryption, is potentially greater exposure to duplication of a vault store by an attacker due to Internet vault storage vulnerabilities. This could come about through authentication weaknesses surrounding the vault, or other means such as side channel attacks. This of course is the flip side of vault sync capability. Ideally the vault store will be encrypted with a second layer as described above, but limiting access is obviously a desirable control to put in place as much as possible. Some vaults allow for local operation only, which could be a sensible step.
  • A Single Point of Failure (SPOF) created by centralising credentials into a password vault. Here is the tradeoff with convenience. A potential mitigation is to regularly duplicate the vault and store it with the password backup as above.

In many personal password vault services, the security of the vault will effectively rest in a single password. This becomes more and more crucial, as the vault is increasingly used to store new credentials. Most users are unaware of just how critical the vault password will be.

Regardless of whether the vault is locally encrypted or not, always use two-factor authentication on critical accounts contained in the vault. This will go some way to mitigating Cloud risk.

Things I don’t consider a critical feature of a password vault include VPNs, file storage, and dark web credential compromise scanning. These are undeniably useful features, but may be better addressed using separate complementary desktop security products.

I’ve intentionally not covered true enterprise password vaults much in this post, for which there are several well-known vendors, solutions and use cases.

Overall, password vaults are highly recommended in the present Internet environment, and with some experience they can turn into a powerful way of improving your digital security, minimising the risks of hacking and identity theft.

Passwords are not a particularly elegant authentication method. They have long been criticised, but we have unfortunately not seen the critical mass needed behind federated identity, or as-yet-unspecified distributed standards, to really halt their use.

Part of the challenge there is technical, such as drawbacks with frameworks like OAuth, but it is also about good options for Identity Providers: the usual crowd such as Google and Twitter are perhaps not the best options long term.