Lessons from talking to the cybersecurity community
Cybermundus recently participated in the YES!Delft AI/Blockchain Validation Lab. The objective of the Validation Lab was to refine our product-market fit and to validate different aspects of our business. An important element of this was to ‘get out of the building’ (virtually, during the pandemic) and talk to a broad range of people from the cybersecurity community. We had discussions with some fifty people from a wide variety of organizations (from start-ups to Fortune 50 multinationals), industries (including retail, finance, healthcare, public sector, energy, professional services, information technology and transport) and backgrounds (business leaders, CIOs, CISOs, security specialists and VCs). We are extremely grateful to everybody who took time out of their busy schedules to talk to us during these unprecedented times.
This is the first article in a series sharing the most salient points from those discussions (whilst preserving the privacy of those interviewed). In this article we focus on aspects related to cybersecurity data.
Back then everything was better
Information security used to be simple…
During the 1960s, organizations started to add protection around their computer systems with the introduction of passwords, although the concept of a password can be traced as far back as the ‘Shibboleth incident’ in the 12th chapter of the Book of Judges:
“Then said they unto him, Say now Shibboleth: and he said Sibboleth: for he could not frame to pronounce it right. Then they took him, and slew him at the passages of Jordan.”
In addition to the use of passwords, most security measures were of a physical nature, such as access controls and fire protection, as the internet had yet to be invented…
The first computer virus (not designed to be malicious) can be attributed to Bob Thomas, who in 1971 wrote an experimental computer program named ‘Creeper’, designed to move between mainframe computers.
‘I’m the creeper, catch me if you can…'
Bob Thomas, Researcher BBN Technologies
From there on, the life of a cybersecurity professional got a lot more complicated, with security incidents growing rapidly year over year. As computer systems become more ubiquitous and interconnected, organizations become ever more dependent on them and adversaries grow more sophisticated, making the work of the cybersecurity department ever more critical. In the broad field of security, attackers have an ‘unfair’ advantage over defenders.
As the ‘digital estate’ of organizations grows, so does the amount of security related data generated by this digital infrastructure. The top friction we observed during our interviews was the overabundance of security related alerts and events, and the continuous struggle Security Operations Centers (‘SOCs’) face to analyze and respond to them. As a matter of fact, in none of the discussions did our respondents indicate that their organization felt they had adequately solved this issue.
‘Organizations are nearly universally overwhelmed by the number of cybersecurity related events and alerts. This hampers their ability to determine whether security measures are adequate and the resulting organizational risks are acceptable.'
From our discussions, we also observed that Advanced Persistent Threats (‘APTs’) are growing in number and sophistication. Furthermore, threat actors are spending more time between their initial breach and reaching their objective, quietly navigating systems to elevate their privileges. The average time between breach and remediation is estimated at almost 300 days.
Well, you may ask: what about the use of modern analytics tools, such as machine learning (ML), on this ever-growing data set? Indeed, there is an increasing number of ML-enabled tools in this space, and we asked about them too. It turns out that about half of the organizations we spoke to use them, but with mixed success. An often-heard complaint is that even with the latest generation of tools (some of the companies behind them have raised very material funding rounds), the large number of false positives remains a significant hurdle to realizing their full value potential. One company we spoke to mentioned they had ‘switched off’ their ML-based solution for this reason, and another stipulated that their environment was ‘too complex’ to fully benefit. Feedback on ML-based solution performance on false negatives was a lot harder to obtain, for obvious reasons. False negatives are a big concern, as they result in real threats going unnoticed. Bottom line: there are no silver bullets for the ‘buffer overflow’ issue SOCs are dealing with.
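To see why false positives are such a persistent hurdle, a simple back-of-the-envelope calculation helps. The numbers below are purely illustrative assumptions of ours (not figures from the interviews), but they show the base-rate effect: when genuine attacks are rare, even an accurate detector produces alert volumes dominated by false alarms.

```python
# Illustrative base-rate sketch: why an accurate ML detector can still
# bury a SOC in false positives. All figures are hypothetical.

events_per_day = 1_000_000      # security events generated daily
malicious_rate = 0.0001         # assume 0.01% of events are truly malicious
true_positive_rate = 0.99       # detector catches 99% of real attacks
false_positive_rate = 0.01      # detector flags 1% of benign events

malicious = events_per_day * malicious_rate          # 100 real attacks
benign = events_per_day - malicious                  # 999,900 benign events

true_alerts = malicious * true_positive_rate         # 99 genuine alerts
false_alerts = benign * false_positive_rate          # 9,999 false alarms
precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts per day: {true_alerts + false_alerts:,.0f}")   # 10,098
print(f"Share that are real attacks: {precision:.1%}")        # ~1.0%
```

Under these (hypothetical) assumptions, only about one alert in a hundred corresponds to a real attack, which matches the ‘overwhelmed’ sentiment we heard throughout the interviews.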
Another important conclusion from our discussions is the lack of structured exchange of security related data between organizations. There are providers of threat intelligence data, of course, and discussions between members of different organizations about threat intel and breaches do take place, but not at the frequency and intensity you would expect given the value potential data sharing brings. One of the key reasons quoted for holding back is the sensitivity of admitting to security related incidents.
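As a simple illustration of what lightweight sharing could look like (our own sketch, not something any interviewee described), two organizations could compare indicators of compromise (IoCs) by exchanging only salted hashes rather than raw values. The salt and indicator values below are hypothetical.

```python
# Minimal sketch: matching indicators of compromise (IoCs) between two
# organizations by exchanging salted hashes instead of the raw values.
import hashlib

SHARED_SALT = b"agreed-out-of-band"  # hypothetical salt both parties agree on

def hash_iocs(iocs):
    """Return the set of salted SHA-256 digests for a set of IoC strings."""
    return {hashlib.sha256(SHARED_SALT + ioc.encode()).hexdigest()
            for ioc in iocs}

# Hypothetical indicator sets observed by each organization
org_a = {"203.0.113.7", "evil.example.com", "198.51.100.9"}
org_b = {"evil.example.com", "192.0.2.44"}

# Each organization publishes only its hash set; intersecting them reveals
# which indicators both have seen, without exposing the rest.
common_hashes = hash_iocs(org_a) & hash_iocs(org_b)
print(f"Indicators seen by both organizations: {len(common_hashes)}")
```

Note that plain salted hashing is only a first step: low-entropy indicators such as IP addresses can be brute-forced from their hashes, which is why genuinely privacy-preserving approaches (for example, private set intersection protocols) go considerably further. We will return to this in a later article.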
We plan to discuss the merits of privacy-preserving collaboration on security related data between organizations in one of the upcoming articles in this series. We are also working towards a Proof of Concept in this area, so please reach out to us if you are interested in learning more.
For now, stay healthy, stay secure!